text (string, lengths 4 to 222k) | label (int64, 0 to 4)
---|---|
Thanks to the scientific and technological advances of recent years, Automatic Speech Recognition (ASR) systems, which transcribe speech into text, are now used in many applications related to Natural Language Processing (NLP). ASR systems have benefited in particular from the massive increase in available data, especially for training their models (Baevski et al., 2020), as well as from deep learning approaches (Deng et al., 2013; Amodei et al., 2016). From an application standpoint, several usage contexts are possible: an automatic transcription can either be used directly (e.g. for automatic subtitling) or serve as a component (often the input) of another system (e.g. human-machine dialogue, automatic indexing of audio documents, etc.). Despite current advances, errors in automatic transcriptions are inevitable and affect their use: for example, transcription errors can propagate to the applications that consume them and thus degrade their performance, or make the transcriptions difficult for a human to understand. ASR systems are mainly evaluated with the Word Error Rate (WER) metric. This metric has the advantage of being simple to set up, since it only requires a reference transcription (i.e. manually annotated) of the words. It is nevertheless limited in the sense that no information other than the word itself is integrated (e.g. no linguistic information is taken into account, no semantic knowledge, etc.). Every error also carries the same weight within this metric, even though we know that, depending on the target task, words have different importance within a textual transcription (Morchid et al., 2016). These limits have already been pointed out in the past, and variants have been proposed, such as the IWER (Mdhaffar et al., 2019), which focuses in particular on the words identified as important in a transcription. In this article, we study a set of automatic measures to support the evaluation of ASR systems, particularly with respect to language-related aspects. These measures should allow a finer-grained analysis of transcription errors by highlighting certain aspects of the errors (part-of-speech classes, errors in context, semantic distance, etc.). One advantage of these measures is that they require no additional manual annotation of the transcriptions and can be applied to any language. Moreover, using several of them highlights different views of the errors, so these metrics can complement one another. We then propose a qualitative analysis, by means of these measures, of a state-of-the-art ASR system, describing in more detail the contribution of a posteriori hypothesis rescoring by a 4-gram language model (LM) coupled with a recurrent neural network language model (RNNLM). The article is organized as follows. In Section 2, we describe the classical WER metric, then we list and detail the different automatic measures we propose to allow a finer linguistic evaluation of transcriptions.
To illustrate the value of using these measures, a qualitative analysis is proposed, first detailing the experimental protocol in Section 3, then the results and analyses in Section 4. Finally, a conclusion and perspectives are given in Section 5. | 0 |
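As a rough illustration of the WER metric discussed in the passage above (this is not code from the paper), the sketch below computes word error rate as the token-level Levenshtein distance between a reference and a hypothesis transcription, normalized by the reference length; the example sentences are invented. Note that every error carries the same weight, which is exactly the limitation the passage points out.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum number of substitutions/insertions/deletions
    # needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# one substitution + one deletion over a 6-word reference -> WER = 2/6
print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))
```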
Generative language models produce likely text based on a context of other text. This process has a surprising number of useful applications, one of which is answering questions about a text passage. By training on text that contains (among other data) passages followed by questions and answers about the passage, and creating a context in which the answer to a question is the most likely continuation of the passage, better text prediction tends to result in better question answering. However, there are many types of questions that could be asked about a passage, from direct questions about facts, to explorations of its themes, to imagining what would happen if something about the story was completely different. Depending on the type of question, the accuracy of answers can vary greatly. While some tests of the question-answering ability of language models have been run, it is difficult to separate out whether a question is answered incorrectly because of changes to the question, or changes to the passage, or both. Therefore, in this paper, we selected a single passage and asked a wide variety of questions about the passage. The response of a particular model to any one of these types of questions could be explored much more extensively on a larger dataset with more passages. Here, we are simply making a first survey of the possibilities. 1
New York Times, 29 September 1973: A 61-year old furniture salesman was pushed down the shaft of a freight elevator yesterday in his downtown Brooklyn store by two robbers while a third attempted to crush him with the elevator car because they were dissatisfied with the $1,200 they had forced him to give them. The buffer springs at the bottom of the shaft prevented the car from crushing the salesman, John J. Hug, after he was pushed from the first floor to the basement. The car stopped about 12 inches above him as he flattened himself at the bottom of the pit. Mr. Hug was pinned in the shaft for about half an hour until his cries attracted the attention of a porter. The store at 340 Livingston Street is part of the Seaman's Quality Furniture chain. Mr. Hug was removed by members of the Police Emergency Squad and taken to Long Island College Hospital. He was badly shaken, but after being treated for scrapes of his left arm and for a spinal injury was released and went home. He lives at 62-01 69th Lane, Maspeth, Queens. He has worked for seven years at the store, on the corner of Nevins Street, and this was the fourth time he had been held up in the store. The last time was about one year ago, when his right arm was slashed by a knife-wielding robber. Table 1: Full text of story in McCarthy (1990). | 0 |
Chinese characters are used in both Japanese and Chinese. In Japanese the Chinese characters are called Kanji, while in Chinese they are called Hanzi. Hanzi can be divided into two groups, Simplified Chinese (used in mainland China and Singapore) and Traditional Chinese (used in Taiwan, Hong Kong and Macao). The number of strokes needed to write characters has been largely reduced in Simplified Chinese, and the shapes may be different from the ones in Traditional Chinese. Table 1 gives some examples of Chinese characters in Japanese, Traditional Chinese and Simplified Chinese. Because Kanji characters originated in ancient China, there exist common Chinese characters between Kanji and Hanzi. In fact, the visual forms of the Chinese characters retain a certain level of similarity: many Kanji are identical to Simplified Chinese (e.g. "snow" and "country" in Table 1), some Kanji are identical to Traditional Chinese but different from Simplified Chinese (e.g. "love" in Table 1), but there also exist some visual variations in Kanji (e.g. "begin" and "hair" in Table 1). On the other hand, Chinese characters carry significant semantic information, and in most cases the meaning does not change between characters with different shapes. For example, the shapes of the three characters " ", " " and " " in Table 1 are quite different, but all of them have the same meaning "begin". Based on the characteristics of Kanji and Hanzi described above, we thought that Kanji and Hanzi information may be valuable in machine translation, especially in word/phrase alignment between Japanese and Chinese. Parallel sentences contain equivalent meanings in each language, and we can assume common Chinese characters appear in the sentences. In this paper, we focus on word/phrase alignment between Japanese and Simplified Chinese, where common Chinese characters often have different shapes and it is hard to detect them automatically. We accomplish the detection by means of freely available resources. In addition, we incorporate common Chinese character information into a joint phrase alignment model on dependency trees. | 0 |
The goal of this paper is to provide a basic account of conditional yes/no responses (CRs): We describe the conditions under which CRs are appropriate, and how these conditions translate into a uniform approach to understanding and producing CRs. 1 We focus on information-seeking dialogues between a human user and a dialogue system in the travel domain. We allow for mixed initiative and negotiation to let a dialogue be more collaborative than "quizzing". In this context CRs arise naturally (1). (1) U.1: Do I need a visa to enter the U.S.? S.1: Not if you are an EU citizen. (2) S.1': Yes, if you are not an EU citizen. (1:S.1) is an example of a negative CR, asserting If you're an EU citizen, then you do not need a visa to enter the U.S. An alternative, positive CR is (2:S.1'), asserting If you're not an EU citizen, then you do need a visa to enter the U.S. In both cases, the system answers the question (1:U.1), but it makes the answer conditional on the value of a particular attribute (here, citizenship). (Footnote 1: This work was done in SIRIDUS (Specification, Interaction and Reconfiguration in Dialogue Understanding Systems), EC Project IST-1999-10516. We would like to thank Geert-Jan Kruijff for detailed discussion and comments.) Moreover, the CR suggests that, for another value, the answer may be different (2). The CRs in (1:S.1) and (2:S.1') are elliptical utterances. Intuitively, they can be expanded to the complete propositions in (3) and (3'). The material for resolving the ellipsis comes from the immediately preceding context. In the approach we work with, ellipsis is resolved with respect to the current question under discussion (QUD, (Ginzburg, 1996)). (3) No, you don't need a visa to enter the U.S. if you are an EU citizen. (3') Yes, you do need a visa to enter the U.S. if you are not an EU citizen. The dialogue move of a CR depends on the context. Consider (4) and (5). Similarly to (1), in (4) the system does not know an attribute-value (A/V) on which the positive or the negative answer to the yes/no question is contingent (here, whether the user wants a business or economy class flight). The system's CR (4:S.2) is a request for further information: whether the user wants a business flight (Monday is out), or does not (she is able to fly on Monday). Likewise, (4:S.2') is a request for further information whether the user wants an economy flight (Monday is available), or not (Monday is out). Dialogue (5) is different. Now the user indicates that she is interested in a business class flight (5:U.1). The system by default assumes that this remains unchanged for another day of travel. What both the negative and positive CR in (5) do is to start a negotiation to either confirm or revise the user's decision for business class. The system's response (5:S.2) or (5:S.2') indirectly proposes a change (to economy class) to achieve the higher-level goal of finding a flight from Köln to Paris on Monday. If the user insists on business class, this goal cannot be achieved. If we want a dialogue system to understand and appropriately produce CRs, we need to describe their semantics in terms of the contextual conditions and communicative goals under which these responses occur, and the effects they have on the dialogue context. We aim at providing the basis of an account that can be implemented in the GoDiS dialogue system. GoDiS is an experimental system in the travel domain, using the information-state approach to dialogue developed in the TRINDI and SIRIDUS projects (Cooper et al., 1999; Lewin et al., 2000).
We focus on aspects that can improve its flexibility and functionality. Overview. In §2 we discuss the uses of positive and negative CRs in terms of their appropriateness conditions and their interpretation. In §3 we discuss dialogue moves. We end the paper with conclusions. | 0 |
Dependency parsers are major components of a large number of NLP applications. As application models are applied to constantly growing amounts of data, efficiency becomes a major consideration. In graph-based dependency parsing models (Eisner, 2000; McDonald et al., 2005a; McDonald et al., 2005b; Carreras, 2007; Koo and Collins, 2010b), given an n word sentence and a model order k, the run time of exact inference is O(n^3) for k = 1 and O(n^(k+1)) for k > 1 in the projective case (Eisner, 1996; McDonald and Pereira, 2006). In the non-projective case it is O(n^2) for k = 1 and NP-hard for k ≥ 2 (McDonald and Satta, 2007).¹ Consequently, a number of approximated parsers have been introduced, utilizing a variety of techniques: the Eisner algorithm (McDonald and Pereira, 2006), belief propagation (Smith and Eisner, 2008), dual decomposition (Koo and Collins, 2010b; Martins et al., 2013) and multi-commodity flows (Martins et al., 2009; Martins et al., 2011). The run time of all these approximations is superlinear in n. Recent pruning algorithms for graph-based dependency parsing (Rush and Petrov, 2012; Riedel et al., 2012; Zhang and McDonald, 2012) have been shown to cut a very large portion of the graph edges, with minimal damage to the resulting parse trees. For example, Rush and Petrov (2012) demonstrated that a single O(n) pass of vine-pruning (Eisner and Smith, 2005) can preserve > 98% of the correct edges, while ruling out > 86% of all possible edges. Such results give strong motivation to solving the inference problem in a run time complexity that is determined solely by the number of edges (m).² In this paper we propose to formulate the inference problem in first-order (arc-factored) dependency parsing as a minimum spanning tree (MST) problem in an undirected graph. Our formulation allows us to employ state-of-the-art algorithms for the MST problem in undirected graphs, whose run time depends solely on the number of edges in the graph. Importantly, a parser that employs our undirected inference algorithm can generate all possible trees, projective and non-projective. Particularly, the undirected MST problem (§2) has a randomized algorithm which is O(m) at expectation and with a very high probability (Karger et al., 1995), as well as an O(m · α(m, n)) worst-case deterministic algorithm (Pettie and Ramachandran, 2002), where α(m, n) is a certain natural inverse of Ackermann's function (Hazewinkel, 2001). As the inverse of Ackermann's function grows extremely slowly,³ the deterministic algorithm is in practice linear in m (§3). In the rest of the paper we hence refer to the run time of these two algorithms as practically linear in the number of edges m. Our algorithm has four steps (§4). First, it encodes the first-order dependency parsing inference problem as an undirected MST problem, in up to O(m) time. Then, it computes the MST of the resulting undirected graph. Next, it infers a unique directed parse tree from the undirected MST. Finally, the resulting directed tree is greedily improved with respect to the directed parsing model. Importantly, the last two steps take O(n) time, which makes the total run time of our algorithm O(m) at expectation and with very high probability.⁴ We integrated our inference algorithm into the first-order parser of McDonald et al. (2005b) and compared the resulting parser to the original parser, which employs the Chu-Liu-Edmonds algorithm (CLE; Chu and Liu, 1965; Edmonds, 1967) for inference.
CLE is the most efficient exact inference algorithm for graph-based first-order non-projective parsers, running at O(n^2) time.⁵ We experimented (§5) with 17 languages from the CoNLL 2006 and 2007 shared tasks on multilingual dependency parsing (Buchholz and Marsi, 2006; Nilsson et al., 2007) and in three English setups. Our results reveal that the two algorithms perform very similarly. While the averaged unlabeled attachment accuracy score (UAS) of the original parser is 0.97% higher than ours, in 11 of 20 test setups the number of sentences that are better parsed by our parser is larger than the number of sentences that are better parsed by the original parser. Importantly, in this work we present an edge-linear first-order dependency parser which achieves similar accuracy to the existing one, making it an excellent candidate to be used for efficient MST computation in k-best trees methods, or to be utilized as an inference/initialization subroutine as a part of more complex approximation frameworks such as belief propagation. In addition, our model produces a different solution compared to the existing one (see Table 2), paving the way for using methods such as dual decomposition to combine these two models into a superior one. Undirected inference has been recently explored in the context of transition based parsing (Gómez-Rodríguez and Fernández-González, 2012; Gómez-Rodríguez et al., 2015), with the motivation of preventing the propagation of erroneous early edge directionality decisions to subsequent parsing decisions. Yet, to the best of our knowledge this is the first paper to address undirected inference for graph based dependency parsing. Our motivation and algorithmic challenges are substantially different from those of the earlier transition based work. (Footnotes: ² ...jointly performed in O(n) steps. We therefore do not include initial graph construction and pruning in our complexity computations. ³ α(m, n) is less than 5 for any practical input sizes (m, n). ⁴ The output dependency tree contains exactly n−1 edges, therefore m ≥ n−1, which makes O(m) + O(n) = O(m). ⁵ CLE has faster implementations: O(m + n log n) (Gabow et al., 1986) as well as O(m log n) for sparse graphs (Tarjan, 1977), both super-linear in n for connected graphs.) | 0 |
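The following sketch illustrates the four-step undirected-MST idea summarized in the passage above. It is not the authors' implementation: it symmetrizes invented arc scores by taking the maximum of the two directions (one simple choice; the paper defines its own undirected encoding), uses networkx's generic minimum-spanning-tree routine instead of the linear-time algorithms cited in the paper, and omits the final greedy improvement step.

```python
import itertools
import networkx as nx

def undirected_mst_parse(arc_scores, n_words, root=0):
    """arc_scores[(h, m)]: score of attaching modifier m to head h (node 0 is the root)."""
    G = nx.Graph()
    for u, v in itertools.combinations(range(n_words + 1), 2):
        # symmetrise the two directed scores into one undirected weight;
        # negated because networkx computes a *minimum* spanning tree
        s = max(arc_scores.get((u, v), 0.0), arc_scores.get((v, u), 0.0))
        G.add_edge(u, v, weight=-s)
    T = nx.minimum_spanning_tree(G, weight="weight")
    # orient the undirected tree away from the artificial root node
    return list(nx.bfs_edges(T, root))  # (head, modifier) arcs

# toy two-word sentence with invented scores
scores = {(0, 1): 5.0, (1, 2): 4.0, (0, 2): 1.0, (2, 1): 3.0}
print(undirected_mst_parse(scores, n_words=2))  # [(0, 1), (1, 2)]
```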
Forums allow their participants to ask questions and to interact with others in order to obtain relevant answers. The popularity of forums shows the ability of this type of interaction to produce relevant answers to questions. Their popularity is such that the first reflex of an Internet user who has a question is to turn to their favorite search engine to check whether a similar question has not already been asked and resolved. To answer such a query in a relevant way, one would need to measure a similarity between the newly asked question and the questions already posted on the forum that takes semantics into account, and not only the number of words in common between the questions, which is what standard search engines rely on. Within the SemEval 2017 evaluation campaign on semantic technologies, the Community Question Answering task is devoted to the analysis of questions and their discussion threads in forums (Nakov et al., 2017). One subtask deals precisely with question-question similarity: given a so-called original question, the goal is to rank, by decreasing semantic similarity, the 10 so-called related questions returned by a search engine on this forum. This task can be seen as a pure semantic textual similarity task on noisy user-generated text, unlike another textual similarity task (Agirre et al., 2016), which deals with well-formed short texts. In the Community Question Answering task, the corpus comes from the English-language forum Qatar Living, in which Western expatriates discuss all aspects of daily life in Qatar (where to play basketball, how to hire a nanny, ...). This task first appeared at SemEval in 2016 (Nakov et al., 2016) and continued in 2017. The approaches proposed in 2016 were mainly late fusions of supervised or unsupervised similarity measures. Many unsupervised measures were based on counting elements shared between the questions, where these elements could be character or word n-grams, or higher-level components such as named entities, semantic roles or portions of syntactic trees (e.g. (Franco-Salvador et al., 2016)). Word embeddings were also widely used (e.g. (Mihaylov & Nakov, 2016)), often simply averaged at the question level and used in a cosine measure or as input to a neural classifier. Our working hypotheses for this task were the following: we considered that forum data was too noisy to obtain reliable outputs from our linguistic analysis tools, and we wanted to focus on semantic similarity between texts. For this reason we did not use any metadata analysis (dates, user profiles, etc.), in order to obtain results that could generalize to other semantic textual similarity tasks.
Thus, we developed semantic similarity measures between texts, without external resources and with almost no linguistic pre-processing, relying only on the availability of a large unannotated corpus representative of the data. Unlike the paper to appear at the SemEval conference (Charlet & Damnati, 2017), where we describe in detail the optimization of the supervised combination of our unsupervised metrics, here we focus on the analysis of the unsupervised metrics. The rest of this article is organized as follows: in Section 2, the unsupervised similarity measure is presented, while alternative measures are presented in Section 3. The results are presented and discussed in Section 4. 2 Soft-cosine similarity measure | 0 |
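As a hedged illustration of the soft-cosine measure named at the end of the passage above, the snippet below computes a cosine between bag-of-words vectors through a word-to-word similarity matrix built from toy word embeddings; the vocabulary, vectors and example questions are invented and are not taken from the Qatar Living data.

```python
import numpy as np

def soft_cosine(a, b, S):
    """a, b: bag-of-words count vectors; S: |V| x |V| word similarity matrix."""
    num = a @ S @ b
    den = np.sqrt(a @ S @ a) * np.sqrt(b @ S @ b)
    return num / den if den else 0.0

vocab = ["hire", "employ", "nanny", "basketball"]
emb = np.array([[1.0, 0.1], [0.9, 0.2], [0.0, 1.0], [0.2, 0.9]])   # toy embeddings
norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
S = np.clip(norm @ norm.T, 0.0, None)       # cosine similarities between words

q1 = np.array([1.0, 0.0, 1.0, 0.0])         # "hire ... nanny"
q2 = np.array([0.0, 1.0, 1.0, 0.0])         # "employ ... nanny"
print(soft_cosine(q1, q2, S))               # close to 1 despite one shared word
print(q1 @ q2 / (np.linalg.norm(q1) * np.linalg.norm(q2)))  # plain cosine: 0.5
```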
It is generally believed that a generation system can be modularized into a sequence of components, the first one making the "high level" decisions (i.e. the conceptual decisions), the following ones making the linguistic decisions (e.g. lexical and syntactic construction choices), the penultimate one performing the "low level" operations (i.e. the syntactic operations), and the last one handling the morphological operations. We have shown in (L. Danlos 1985, 1987a) that the conceptual and linguistic decisions are operations that depend on each other. Therefore, we designed a generation system modularized in the following way: a "strategic component" makes the conceptual and linguistic decisions simultaneously and gives back "clause templates" which are synthesized into clauses by a "syntactic component". A simplified version of the clause template syntax is the following one (a more complete version is presented in (L. Danlos 1987b)): [prép] = (:prép preposition). The prepositional complements [à-object] and [de-object] are complements respectively introduced by à and de in French, a and di in Italian. They are separated from the prepositional complements [prép-object] introduced by other prepositions because they have a specific syntactic behaviour, especially in regard to pronominalization (cf. 3). An example of a clause template is: ((:subject HUM1) (:verb amare) (:dir-object HUM2)) with HUM1 =: PERSON (NAME: Ugo, SEX: masc) and HUM2 =: PERSON (NAME: Maria, SEX: fem). According to the context (i.e. the clause templates that have been previously synthesized), the syntactic component, which handles pronominalization, produces one of the following Italian clauses (given that the verb is in the present tense): Ugo ama Maria (Ugo loves Mary); Ugo l'ama (Ugo loves her); Quest'uomo l'ama (This man loves her); Ama questa donna (He loves this woman). It will be shown in 3 that pronominalization involves the morphological level. The decisions concerning pronominalization, which is a stumbling block for natural language processing, must certainly not be made last. Thus, the morphological level (a level supposedly very "low") must not be taken into account only at the very last stage of the generation process. The second aim of this paper is to put forward "non local dependencies" which are to be found when the synthesis of an element X depends upon that of another element Y. Such a dependency requires the synthesis of X to be carried out after that of Y, whatever the order of X and Y in the clause template. Moreover, cases of "cross dependencies" are to be found when the synthesis of X depends upon that of Y and when the synthesis of Y depends upon that of X. A cross dependency leads to conflicting orderings, namely synthesis of X after that of Y and synthesis of Y after that of X. The solution to such conflicting orderings is to perform a sequence of incomplete syntheses of X and Y. | 0 |
Transformer-based neural summarization models (Liu and Lapata, 2019; Stiennon et al., 2020; Xu et al., 2020b; Desai et al., 2020), especially pretrained abstractive models like BART (Lewis et al., 2020) and PEGASUS (Zhang et al., 2020), have made great strides in recent years. These models demonstrate exciting new capabilities in terms of abstraction, but little is known about how these models work. In particular, do token generation decisions leverage the source text, and if so, which parts? Or do these decisions arise based primarily on knowledge from the language model (Jiang et al., 2020; Carlini et al., 2020), learned during pre-training or fine-tuning? Having tools to analyze these models is crucial to identifying and forestalling problems in generation, such as toxicity (Gehman et al., 2020) or factual errors (Kryscinski et al., 2020; Durrett, 2020, 2021). Although interpreting classification models for NLP has been widely studied from perspectives like feature attribution (Ribeiro et al., 2016; Sundararajan et al., 2017) and influence functions (Koh and Liang, 2017; Han et al., 2020), summarization specifically introduces some additional elements that make these techniques hard to apply directly. First, summarization models make sequential decisions from a very large state space. Second, encoder-decoder models have a special structure, featuring a complex interaction of decoder-side and encoder-side computation to select the next word. Third, pre-trained LMs blur the distinction between relying on implicit prior knowledge or explicit instance-dependent input. This paper aims to more fully interpret the stepwise prediction decisions of neural abstractive summarization models. 1 First, we roughly bucket generation decisions into one of several modes of generation. After confirming that the models we use are robust to seeing partial inputs, we can probe the model by predicting next words with various model ablations: a basic language model with no input (LM∅), a summarization model with no input (S∅), with part of the document as input (S_part), and with the full document as input (S_full). These ablations tell us when the decision is context-independent (generated in an LM-like way), when it is heavily context-dependent (generated from the context), and more. We map these regions in Figure 2 and can use these maps to coarsely analyze model behavior. For example, 17.6% of the decisions on XSum are in the lower-left corner (LM-like), which means they do not rely much on the input context. (Figure 1: Our two-stage ablation-attribution framework. First, we compare a decoder-only language model (not fine-tuned on the summarization task, and not conditioned on the input article) and a full summarization model, colored in gray and orange respectively. The higher the difference, the more heavily the model depends on the input context. For those context-dependent decisions, we conduct content attribution to find the relevant supporting content with methods like Integrated Gradients or Occlusion.) Second, we focus on more fine-grained attribution of decisions that arise when the model does rely heavily on the source document. We carefully examine interpretations based on several prior techniques, including occlusion (Zeiler and Fergus, 2014), attention, integrated gradients (Sundararajan et al., 2017), and input gradients (Hechtlinger, 2016).
In order to evaluate and compare these methods, we propose a comprehensive evaluation based on presenting counterfactual, partial inputs to quantitatively assess these models' performance with different subsets of the input data.Our two-stage analysis framework allows us to (1) understand how each individual decision depends on context and prior knowledge (Sec 3), (2) find suspicious cases of memorization and bias (Sec 4), (3) locate the source evidence for context dependent generation (Sec 5). The framework can be used to understand more complex decisions like sentence fusion (Sec 6). | 0 |
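A schematic sketch of the ablation-based bucketing idea described in the passage above: compare the probability each ablation assigns to the generated token and label the decision accordingly. The function name, thresholds and bucket names are invented for illustration; the paper's actual mapping (Figure 2) is two-dimensional and finer-grained.

```python
def bucket_decision(p_no_context: float, p_full_context: float,
                    low: float = 0.2, high: float = 0.8) -> str:
    """p_* = probability of the generated token under each model ablation."""
    if p_no_context >= high and p_full_context >= high:
        return "LM-like"              # predictable without the source article
    if p_no_context < low and p_full_context >= high:
        return "context-dependent"    # the source document drives the choice
    if p_full_context < low:
        return "hard / uncertain"     # even the full model is unsure
    return "mixed"

print(bucket_decision(0.85, 0.90))    # LM-like
print(bucket_decision(0.05, 0.95))    # context-dependent
```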
Recent deep neural network successes rekindled classic debates on their natural language processing abilities (e.g., Kirov and Cotterell, 2018; McCoy et al., 2018; Pater, 2018). Lake and Baroni (2018) and Loula et al. (2018) proposed the SCAN challenge to directly assess the ability of sequence-to-sequence networks to perform systematic, compositional generalization of linguistic rules. Their results, and those of Bastings et al. (2018), have shown that modern recurrent networks (gated RNNs, such as LSTMs and GRUs) generalize well to new sequences that resemble those encountered in training, but achieve very low performance when generalization must be supported by a systematic compositional rule, such as "to X twice you X and X" (e.g., to jump twice, you jump and jump again). Non-recurrent models, such as convolutional neural networks (CNNs; Kalchbrenner et al., 2016; Gehring et al., 2016, 2017) and self-attentive models (Vaswani et al., 2017; Chen et al., 2018), have recently reached comparable or better performance than RNNs on machine translation and other benchmarks. Their linguistic properties are however still generally poorly understood. Tang et al. (2018) have shown that RNNs and self-attentive models are better than CNNs at capturing long-distance agreement, while self-attentive networks excel at word sense disambiguation. In an extensive comparison, Bai et al. (2018) showed that CNNs generally outperform RNNs, although the differences were typically not huge. We evaluate here an out-of-the-box CNN on the most challenging SCAN tasks, and we uncover the surprising fact that CNNs are dramatically better than RNNs at compositional generalization. As they are more cumbersome to train, we leave testing of self-attentive networks to future work. | 0 |
Metaphors are one kind of figurative language that uses conceptual mapping to represent one thing (the target domain) as another (the source domain). As proposed by Lakoff and Johnson (1980) in Conceptual Metaphor Theory (CMT), metaphor is not only a property of language but also a cognitive mechanism that describes our conceptual system. Thus metaphors are devices that transfer the properties of one domain to another unrelated or different domain, as in 'sweet voice' (using taste to describe sound). Metaphors are prevalent in daily life and play a significant role in how people interpret and understand complex concepts. On the other hand, as a popular linguistic device, metaphors encode versatile ontological information, which usually involves e.g. domain transfer (Ahrens et al., 2003; Ahrens, 2010; Ahrens and Jiang, 2020), sentiment reversal (Steen et al., 2010) or modality shift (Winter, 2019) etc. Therefore, detecting the metaphors in texts is essential for capturing the authentic meaning of the texts, which can benefit many natural language processing applications, such as machine translation, dialogue systems and sentiment analysis (Tsvetkov et al., 2014). In this shared task, we aim to detect token-level metaphors in plain texts by focusing on content words (verbs, nouns, adjectives and adverbs) of two corpora: VUA 1 and TOEFL 2. To better understand the intrinsic properties of metaphors and to provide an in-depth analysis of this phenomenon, we propose a linguistically-enriched model to deal with this task with the use of modality exclusivity and embodiment norms (see details in Section 3). | 0 |
A revolution in the way tasks can be completed occurred when it was proposed to take a job traditionally performed by a designated employee and outsource it to an undefined large group of Internet users. This approach, called crowdsourcing (Howe, 2008) , changed traditional thinking behind methods for language resource creation and made new tasks possible that were previously inconceivable due to cost or labour limitations. For example, microworking is now a common crowdsourcing approach to create small to medium-sized language resources by engaging a crowd using small payments (Snow et al., 2008 ). An alternative approach is to use a game-with-a-purpose (GWAP) to aggregate data from non-expert players, who are motivated by entertainment, to create collective decisions similar to those from an expert (von Ahn, 2006) . Phrase Detectives, an interactive online game for creating anaphorically-annotated corpora, is an illustration of the GWAP approach for creating large-scale resources. The Phrase Detectives corpus differs from existing corpora for anaphora in two key respects: (i) it covers genres for which no other data are available, including encyclopedic and narrative text; and (ii) multiple solutions (or interpretations) are collected per task. This paper briefly describes the game and annotation scheme, before describing in more detail the measures of quality used and an analysis of a subset of the corpus which has been made available to the language resource community. | 0 |
Developing machine translation models without using bilingual parallel text is an intriguing research problem with real applications: obtaining a large volume of parallel text for many languages is hard if not impossible. Moreover, translation models could be used in downstream cross-lingual tasks in which annotated data does not exist for some languages. There has recently been a great deal of interest in unsupervised neural machine translation (e.g. Artetxe et al. (2018a); Lample et al. (2018a,c); Conneau and Lample (2019); Song et al. (2019a); Tae et al. (2020)). Unsupervised neural machine translation models often perform nearly as well as supervised models when translating between similar languages, but they fail to perform well in low-resource or distant languages or out-of-domain monolingual data (Marchisio et al., 2020). In practice, the highest need for unsupervised models is to expand beyond high-resource, similar European language pairs. There are two key goals in this paper: Our first goal is developing accurate translation models for low-resource distant languages without any supervision from a supervised model or gold-standard parallel data. Our second goal is to show that our machine translation models can be directly tailored to downstream natural language processing tasks. In this paper, we showcase our claim in cross-lingual image captioning and cross-lingual transfer of dependency parsers, but this idea is applicable to a wide variety of tasks. We present a fast and accurate approach for learning translation models using Wikipedia. Unlike unsupervised machine translation that solely relies on raw monolingual data, we believe that we should not neglect the availability of incidental supervision from online resources such as Wikipedia. Wikipedia contains articles in nearly 300 languages and more languages might be added in the future, including indigenous languages and dialects of different regions in the world. Different from similar recent work (Schwenk et al., 2019a), we do not rely on any supervision from supervised translation models. Instead, we leverage the fact that linked Wikipedia pages are often rough counterparts of each other, in which the titles, first sentences, and also the image captions are rough translations of each other. Our method learns a seed bilingual dictionary from a small collection of first sentence pairs, titles and captions, and then learns cross-lingual word embeddings. We make use of cross-lingual word embeddings to extract parallel sentences from Wikipedia. Our experiments show that our approach improves over strong unsupervised translation models for low-resource languages: we improve the BLEU score of English→Gujarati from 0.6 to 15.2, and English→Kazakh from 0.8 to 12.1. In the realm of downstream tasks, we show that we can easily use our translation models to generate high-quality translations of the MS-COCO (Chen et al., 2015) and Flickr (Hodosh et al., 2013) datasets, and train a cross-lingual image captioning model in a multi-task pipeline paired with machine translation in which the model is initialized by the parameters from our translation model. Our results on Arabic captioning show a BLEU score of 5.72 that is slightly better than a supervised captioning model with a BLEU score of 5.22. As another task, in dependency parsing, we first translate a large amount of monolingual data using our translation models and then apply transfer using the annotation projection method (Yarowsky et al., 2001; Hwa et al., 2005).
Our results show that our approach performs similarly compared to using gold-standard parallel text in high-resource scenarios, and significantly better in low-resource languages. A summary of our contribution is as follows: • We propose a simple, fast and effective approach towards using the Wikipedia monolingual datasets. Our code is publicly available online 1. In this section, we briefly describe the main concepts that we repeatedly use throughout the paper. | 0 |
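One common way to realize the parallel-sentence extraction step mentioned above (not necessarily the authors' exact rule) is a margin criterion over cosine similarities between cross-lingual sentence embeddings: keep a pair only if its score clearly beats the runner-up candidate. The vectors and threshold below are toy values for illustration.

```python
import numpy as np

def mine_parallel(src_vecs, tgt_vecs, margin=0.1):
    """Return (src_index, tgt_index, score) for confidently matched sentence pairs."""
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sims = src @ tgt.T                          # cosine similarity matrix
    pairs = []
    for i, row in enumerate(sims):
        best = int(np.argsort(row)[-1])         # index of the best candidate
        second = np.sort(row)[-2]               # score of the runner-up
        if row[best] - second >= margin:        # keep confident matches only
            pairs.append((i, best, float(row[best])))
    return pairs

src = np.array([[0.9, 0.1, 0.0], [0.1, 0.9, 0.2]])                   # toy source embeddings
tgt = np.array([[0.88, 0.15, 0.05], [0.2, 0.8, 0.3], [0.4, 0.4, 0.4]])  # toy target embeddings
print(mine_parallel(src, tgt))   # pairs sentence 0 with 0 and sentence 1 with 1
```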
The output of an NLU component is called a semantic or dialog frame (Hakkani-Tür et al., 2016). The frame consists of intents, which capture information about the goal of the user, and slot labels, which capture constraints that need to be satisfied in order to fulfill the user's request. For example, in Figure 1, the intent is to book a flight (atis_flight) and the slot labels are the from location, to location and the date. The intent detection task can be modeled as a classification problem and slot labeling as a sequential labeling problem. The ATIS (Airline Travel Information System) dataset (Hakkani-Tür et al., 2010) is widely used for evaluating the NLU component. We focus on complex aspects of dialog that occur in real-world scenarios but are not captured in ATIS or other alternatives such as DSTC (Henderson et al., 2014) or SNIPS 1. As an example, consider a reasonable user utterance, "can i get two medium veggie pizza and one small lemonade" (Figure 2A). The intent is OrderItems. There are two items mentioned, each with three properties. The properties are the name of the item (veggie pizza, lemonade), the quantity of the item (two, one) and the size of the item (medium, small). These properties need to be grouped together accurately to successfully fulfill the customer's request: the customer would not be happy with one small veggie pizza. This structure occurs to a limited extent in the ATIS dataset (Figure 2B), which has specific forms such as fromloc.city_name and toloc.city_name, which must be distinguished. However, the scale is small enough that these can be separate labels, and multi-class slot-labeling approaches that predict each specific form as a separate class (Figure 1) have had success. In more open domains, this hierarchy-to-multi-class conversion increases the number of classes exponentially vs. an approach that appropriately uses available structure. Further, hierarchical relationships, e.g. between fromloc and city_name, are ignored, which limits the sharing of data and statistical strength across labels. The contributions of this paper are as follows: • We propose a recursive, hierarchical frame-based representation that captures complex relationships between slot labels, and show how to learn this representation from raw user text. This enables sharing statistical strength across labels. Such a representation (Figure 3) also allows us to include multiple intents in a single utterance (Gangadharaiah and Narayanaswamy, 2019; Kim et al., 2017; Xu and Sarikaya, 2013). • We formulate frame generation as a template-based tree-decoding task (Section 3). The value or positional information at each terminal (represented by a $) in the template generated by the tree decoder is predicted (or filled in) using a pointer to the tokens in the input sentence (Vinyals et al., 2015; Jia and Liang, 2016). This allows the system to copy slot values directly from the input utterance. • We extend (local) tree-based loss functions with global supervision (Section 3.5), optimize jointly for all loss functions end-to-end, and show that this improves performance (Section 4). | 0 |
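A hypothetical rendering of the recursive, hierarchical frame argued for in the passage above, grouping each item's properties under its own sub-frame so that quantities and sizes stay attached to the right item; the field names are invented for illustration and are not the paper's schema.

```python
order_frame = {
    "intent": "OrderItems",
    "items": [
        {"name": "veggie pizza", "size": "medium", "quantity": "two"},
        {"name": "lemonade", "size": "small", "quantity": "one"},
    ],
}

# the grouping makes it unambiguous which size/quantity belongs to which item,
# unlike a flat multi-class slot-label sequence
for item in order_frame["items"]:
    print(item["quantity"], item["size"], item["name"])
```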
A recent study commissioned by the World Economic Forum projected that mental disorders will be the single largest health cost, with global costs increasing to $6 trillion annually by 2030 (Bloom et al., 2011) . Since mental health impacts the risk for chronic, non-communicable diseases, in a sense there is "no health without mental health" (Prince et al., 2007) . The importance of mental health has driven the search for new and innovative methods for obtaining reliable information and evidence about mental disorders. The WHO's Mental Health Action Plan for the next two decades calls for the strengthening of "information systems, evidence and research," which necessitates new development and improvements in global mental health surveillance capabilities (World Health Organization, 2013) .As a result, research on mental health has turned to web data sources (Ayers et al., 2013; Althouse et al., 2014; Yang et al., 2010; Hausner et al., 2008) , with a particular focus on social media (De Choudhury, 2014; Schwartz et al., 2013a; De Choudhury et al., 2011) . While many users discuss physical health conditions such as cancer or the flu (Paul and Dredze, 2011; Dredze, 2012; Aramaki et al., 2011; Hawn, 2009) , some also discuss mental illness. There are a variety of motivations for users to share this information on social media: to offer or seek support, to fight the stigma of mental illness, or perhaps to offer an explanation for certain behaviors.Past mental health work has largely focused on depression, with some considering post-traumatic stress disorder (Coppersmith et al., 2014b) , suicide (Tong et al., 2014; Jashinsky et al., 2014) , seasonal affective disorder, and bipolar disorder (Coppersmith et al., 2014a) . While these represent some of the most common mental disorders, it only begins to consider the range of mental health conditions for which social media could be utilized. Yet obtaining data for many conditions can be difficult, as previous techniques required the identification of affected individuals using traditional screening methods (De Choudhury, 2013; Schwartz et al., 2013b) . Coppersmith et al. (2014a) proposed a novel way of obtaining mental health related Twitter data. Using the self-identification technique of Beller et al. (2014) , they looked for statements such as "I was diagnosed with depression", automatically uncovering a large number of users with mental health conditions. They demonstrated success at both surveillance and analysis of four mental health conditions. While a promising first step, the technique's efficacy for a larger range of disorders remained untested.In this paper we employ the techniques of Coppersmith et al. (2014a) to amass a large, diverse collection of social media and associated labels of diagnosed mental health conditions. We consider the broadest range of conditions to date, many significantly less prevalent than the disorders examined previously. This tests the capacity of our approach to scale to many mental health conditions, as well as its capability to analyze relationships between conditions. In total, we present results for ten conditions, including the four considered by Coppersmith et al. (2014a) . 
To demonstrate the presence of quantifiable signals for each condition, we build machine learning classifiers capable of separating users with each condition from control users. Furthermore, we extend previous analysis by considering approximate age- and gender-matched controls, in contrast to the randomly selected controls in most past studies. Dos Reis and Culotta (2015) found demographic controls an important baseline, as they muted the strength of the measured outcomes in social media compared to a random control group. Using demographically-matched controls allows us to clarify the analysis in conditions where age is a factor, e.g., people with PTSD tend to be older than the average user on Twitter. Using the ten conditions and control groups, we characterize a broad range of differences between the groups. We examine differences in usage patterns of categories from the Linguistic Inquiry Word Count (LIWC), a widely used psychometrically validated tool for psychology-related analysis of language (Pennebaker et al., 2001). Depression is the only condition for which considerable previous work on social media exists for comparison, and we largely replicate those previous results. Finally, we examine relationships between the language used by people with various conditions - a task for which comparable data has never before been available. By considering multiple conditions, we can measure similarities and differences of language usage between conditions, rather than just between a condition and the general population. The paper is structured as follows: we begin with a description of how we gathered and curated the data, then present an analysis of the data's coherence and the quantifiable signals we can extract from it, including a broad survey of observed differences in LIWC categories. Finally, we measure language correlations between pairs of conditions. We conclude with a discussion of some possible future directions suggested by this exploratory analysis. | 0 |
We are developing the Spoken Language TRANSlation system (SL-TRANS) [1], in which both speech recognition processing and natural language processing are integrated. Currently we are studying automatic speech translation from Japanese into English in the domain of dialogues with the reception service of an international conference office. In this framework we are constructing syntactic rules for recognition of Japanese speech. In speech recognition, the most significant concern is raising the recognition accuracy. For that purpose, applying linguistic information turns out to be promising. Various approaches have been taken, such as using stochastic models [2], syntactic rules [3], semantic information [4] and discourse plans [5]. Among stochastic models, the bigram and trigram succeeded in achieving a high recognition accuracy in languages that have a strong tendency toward a standard word order, such as English. On the contrary, Japanese belongs to the free word order languages [6]. For such a language, semantic information is more adequate as a constraint. However, building semantic constraints for a large vocabulary needs a tremendous amount of data. Currently, our data consist of dialogues between the conference registration office and prospective conference participants with approximately 199,000 words in telephone conversations and approximately 72,000 words in keyboard conversations. But our data are still not sufficient to build appropriate semantic constraints for sentences with 700 distinct words. Processing a discourse plan requires excessive calculation, and the study of discourse itself must be further developed to be applicable to speech recognition. On the other hand, syntax has been studied in more detail and makes increasing the vocabulary easier. As we are working on spoken language, we try to reflect real language usage. For this purpose, a stochastic approach beyond trigrams, namely stochastic sentence parsing [7], seems most promising. Ideally, syntactic rules should be generated automatically from a large dialogue corpus and probabilities should also be automatically assigned to each node. But to do so, we need underlying rules. Moreover, coping with phoneme perplexity, which is crucial to speech recognition, with rules created from a dialogue corpus requires additional research [8]. In this paper we propose taking into account the weaknesses of the speech recognition system at the earliest stage, namely when we construct underlying syntactic rules. First, we examined the speech recognition results to determine which syntactic categories tend to be recognized erroneously. Second, we utilized our dialogue corpus [9] to support the refinement of rules concerning those categories. As examples, we discuss formal nouns 1 and conjunctive postpositions 2. Finally, we carried out a speech recognition experiment with the refined rules to verify the validity of our approach. | 0 |
Language, and therefore data derived from language, changes over time (Ullmann, 1962). Word senses can shift over long periods of time (Wilkins, 1993; Wijaya and Yeniterzi, 2011; Hamilton et al., 2016), and written language can change rapidly in online platforms (Eisenstein et al., 2014; Goel et al., 2016). However, little is known about how shifts in text over time affect the performance of language processing systems. This paper focuses on a standard text processing task, document classification, to provide insight into how classification performance varies with time. We consider both long-term variations in text over time and seasonal variations which change throughout a year but repeat across years. Our empirical study considers corpora containing formal text spanning decades as well as user-generated content spanning only a few years. After describing the datasets and experiment design, this paper has two main sections, respectively addressing the following research questions: 1. In what ways does document classification depend on the timestamps of the documents? 2. Can document classifiers be adapted to perform better in time-varying corpora? To address question 1, we train and test on data from different time periods, to understand how performance varies with time. To address question 2, we apply a domain adaptation approach, treating time intervals as domains. We show that in most cases this approach can lead to improvements in classification performance, even on future time intervals. Time is implicitly embedded in the classification process: classifiers are often built to be applied to future data that doesn't yet exist, and performance on held-out data is measured to estimate performance on future data whose distribution may have changed. Methods exist to adjust for changes in the data distribution (covariate shift) (Shimodaira, 2000; Bickel et al., 2009), but time is not typically incorporated into such methods explicitly. One line of work that explicitly studies the relationship between time and the distribution of data is work on classifying the time period in which a document was written (document dating) (Kanhabua and Nørvåg, 2008; Chambers, 2012; Kotsakos et al., 2014). However, this task is directed differently from our work: predicting timestamps given documents, rather than predicting information about documents given timestamps.
Corpus | Time intervals (non-seasonal) | Time intervals (seasonal) | Size
---|---|---|---
Reviews (music) | 1997-99, 2000-02, 2003-05, 2006-08, 2009-11, 2012-14 | Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec | 653K
Reviews (hotels) | 2005-08, 2009-11, 2012-14, 2015-17 | Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec | 78.6K
Reviews (restaurants) | 2005-08, 2009-11, 2012-14, 2015 | Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec | 1.16M
News (economy) | 1950-70, 1971-85, 1986-2000, 2001-14 | Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec | 6.29K
Politics (platforms) | 1948-56, 1960-68, 1972-80, 1984-92, 1996-2004, 2008 | |
 | 2013, 2014, 2015, 2016 | Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec | 9.83K
Table 1: Descriptions of corpora spanning multiple time intervals. Size is the number of documents. | 0 |
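An illustrative sketch (toy data, not the paper's code) of the basic protocol behind research question 1 above: fit a bag-of-words classifier on one time interval and measure accuracy separately on later intervals to see how performance drifts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def accuracy_by_period(train_docs, train_labels, test_sets):
    """test_sets: {period_name: (docs, labels)} for each later time interval."""
    vec = TfidfVectorizer()
    X_train = vec.fit_transform(train_docs)
    clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return {period: clf.score(vec.transform(docs), labels)
            for period, (docs, labels) in test_sets.items()}

# invented mini-corpus of "reviews" with sentiment labels, split by period
train_docs = ["great record", "awful album", "love this band", "terrible sound"]
train_labels = [1, 0, 1, 0]
test_sets = {"2009-11": (["great band", "awful record"], [1, 0]),
             "2012-14": (["love the vinyl", "terrible mastering"], [1, 0])}
print(accuracy_by_period(train_docs, train_labels, test_sets))
```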
Treebank-based statistical parsers tend to achieve greater coverage and robustness compared to approaches using handcrafted grammars. However, they are criticised for being too shallow to mark important syntactic and semantic dependencies needed for meaning-sensitive applications (Kaplan, 2004). To treat this deficiency, a number of researchers have concentrated on enriching shallow parsers with deep dependency information. Cahill et al. (2002) outlined an approach which exploits information encoded in the Penn-II Treebank (PTB) trees to automatically annotate each node in each tree with LFG f-structure equations representing deep predicate-argument structure relations. From this LFG annotated treebank, large-scale unification grammar resources were automatically extracted and used in parsing (Cahill et al., 2008) and generation (Cahill and van Genabith, 2006). This approach was subsequently extended to other languages including German (Cahill et al., 2003), Chinese (Guo et al., 2007), Spanish (O'Donovan, 2004; Chrupala and van Genabith, 2006) and French (Schluter and van Genabith, 2008). Arabic is a semitic language and is well-known for its morphological richness and syntactic complexity. In this paper we describe the porting of the LFG annotation methodology to Arabic in order to induce LFG f-structures from the Penn Arabic Treebank (ATB) (Bies, 2003; Maamouri and Bies, 2004). We evaluate both the coverage and quality of the automatic f-structure annotation of the ATB. Ultimately, our goal is to use the f-structure annotated ATB to derive wide-coverage resources for parsing and generating unrestricted Arabic text. In this paper we concentrate on the annotation algorithm. The paper first provides a brief overview of Lexical Functional Grammar and the Penn Arabic Treebank (ATB). The next section presents the architecture of the f-structure annotation algorithm for the acquisition of f-structures from the Arabic treebank. The last section provides an evaluation of the quality and coverage of the annotation algorithm. Lexical-Functional Grammar (LFG) (Kaplan and Bresnan, 1982; Bresnan, 2001; Falk, 2001; Sells, 1985) is a constraint-based theory of grammar. LFG rejects concepts of configurationality and movement familiar from generative grammar, and provides a non-derivational alternative of parallel structures of which phrase structure trees are only one component. LFG involves two basic, parallel forms of knowledge representation: c(onstituent)-structure, which is represented by (f-structure annotated) phrase structure trees; and f(unctional)-structure, represented by a matrix of attribute-value pairs. While c-structure accounts for language-specific lexical idiosyncrasies, syntactic surface configurations and word order variations, f-structure provides a more abstract level of representation (grammatical functions/labeled dependencies), abstracting from some cross-linguistic syntactic differences. Languages may differ typologically as regards surface structural representations, but may still encode similar syntactic functions (such as subject, object, adjunct, etc.). For a recent overview of LFG-based analyses of Arabic see Attia (2008), who presents a hand-crafted Arabic LFG parser using the XLE (Xerox Linguistics Environment). The Penn Arabic Treebank project started in 2001 with the aim of describing written Modern Standard Arabic newswire. The Treebank consists of 23611 sentences (Bies, 2003; Maamouri and Bies, 2004).
Arabic is a subject pro-drop language: a null category (pro) is allowed in the subject position of a finite clause if the agreement features on the verb are rich enough to enable content to be recovered (Baptista, 1995; Chomsky, 1981). This is represented in the ATB annotation by an empty node after the verb marked with a -SBJ functional tag. The ATB annotation, following the Penn-II Treebank, utilises the concept of empty nodes and traces to mark long distance dependencies, as in relative clauses and questions. The default word order in Arabic is VSO. When the subject precedes the verb (SVO), the construction is considered as topicalized. Modern Standard Arabic also allows VOS word order under certain conditions, e.g. when the object is a pronoun. The ATB annotation scheme involves 24 basic POS-tags (497 different tags with morphological information), 22 phrasal tags, and 20 individual functional tags (52 different combined tags). The relatively free word order of Arabic means that phrase structural position is not an indicator of grammatical function, a feature of English which was heavily exploited in the automatic LFG annotation of the Penn-II Treebank (Cahill et al., 2002). Instead, in the ATB functional tags are used to mark the subject as well as the object. The syntactic annotation style of the ATB follows, as much as possible, the methodologies and bracketing guidelines already used for the English Penn-II Treebank. For example, in the Penn English Treebank (PTB) (Marcus, 1994), small clauses are considered sentences composed of a subject and a predicate, without traces for an omitted verb or any sort of control relationship, as in example (1) for the sentence "I consider Kris a fool". (1) (S (NP-SBJ I) (VP consider (S (NP-SBJ Kris) (NP-PRD a fool)))) The team working on the ATB found this approach very convenient for copula constructions in Arabic, which are mainly verbless (Maamouri and Bies, 2004). Therefore they used a similar analysis without assuming a deleted copula verb or control relationship, as in (2). (2) (S (NP-SBJ Al-mas>alatu المسألة) (ADJ-PRD basiyTatuN بسيطة)) المسألة بسيطة Al-mas>alatu basiyTatuN the-question simple 'The question is simple.' | 0 |
In psycholinguistics, priming refers to the fact that speakers prefer to reuse recently encountered linguistic material. Priming effects typically manifest themselves in shorter processing times or higher usage frequencies for reused material compared to non-reused material. These effects are attested both in language comprehension and in language production. Structural priming occurs when a speaker repeats a syntactic decision, and has been demonstrated in numerous experiments over the past two decades (e.g., Bock, 1986; Branigan et al., 2000) . These experimental findings show that subjects are more likely to choose, e.g., a passive voice construction if they have previously comprehended or produced such a construction.Recent studies have used syntactically annotated corpora to investigate structural priming. The results have demonstrated the existence of priming effects in corpus data: they occur for specific syntactic constructions (Gries, 2005; Szm-recsanyi, 2005) , consistent with the experimental literature, but also generalize to syntactic rules across the board, which repeated more often than expected by chance (Reitter et al., 2006b; Dubey et al., 2006) . In the present paper, we build on this corpus-based approach to priming, but focus on the role of the underlying syntactic representations. In particular, we use priming to evaluate claims resulting from a particular syntactic theory, which is a way of testing the representational assumptions it makes.Using priming effects to inform syntactic theory is a novel idea; previous corpus-based priming studies have simply worked with uncontroversial classes of constructions (e.g., passive/active). The contribution of this paper is to overcome this limitation by defining a computational model of priming with a clear interface to a particular syntactic framework. The general assumption we make is that priming is a phenomenon relating to grammatical constituents -these constituents determine the syntactic choices whose repetition can lead to priming. Crucially, grammatical frameworks differ in the grammatical constituents they assume, and therefore predict different sets of priming effects.We require the following ingredients to pursue this approach: a syntactic theory that identifies a set of constituents, a corpus of linguistic data annotated according to that syntactic theory, and a statistical model that estimates the strength of priming based on a set of external factors. We can then derive predictions for the influence of these factors from the syntactic theory, and test them using the statistical model. In this paper, we use regression models to quantify structural priming effects and to verify predictions made by Combinatory Categorial Grammar (CCG, Steedman (2000) ), a syntactic framework that has the theoretical potential to elegantly explain some of the phenomena discovered in priming experiments.CCG is distinguished from most other grammatical theories by the fact that its rules are type-dependent, rather than structure-dependent like classical transformations. Such rules adhere strictly to the constituent condition on rules, i.e., they apply to and yield constituents. Moreover, the syntactic types that determine the applicability of rules in derivations are transparent to (i.e., are determined, though not necessarily uniquely, by) the semantic types that they are associated with. 
As a consequence, syntactic types are more expressive and more numerous than standard parts of speech: there are around 500 highly frequent CCG types, against the standard 50 or so Penn Treebank POS tags. As we will see below, these properties allow CCG to discard a number of traditional assumptions concerning surface constituency. They also allow us to make a number of testable predictions concerning priming effects, most importantly (a) that priming effects are type-driven and independent of derivation, and, as a corollary; (b) that lexical and derived constituents of the same type can prime each other. These effects are not expected under more traditional views of priming as structure-dependent.This paper is organized as follows: Section 2 explains the relationship between structural priming and CCG, which leads to a set of specific predictions, detailed in Section 3. Sections 4 and 5 present the methodology employed to test these predictions, describing the corpus data and the statistical analysis used. Section 6 then presents the results of three experiments that deal with priming of lexical vs. phrasal categories, priming in incremental vs. normal form derivations, and frequency effects in priming. Section 7 provides a discussion of the implications of these findings. | 0 |
Named entity recognition is an important subtask in natural language processing (NLP). The results of recognition and classification of proper nouns in a text document are widely used in information retrieval, information extraction, machine translation, question answering and automatic summarization (Nadeau and Sekine, 2007; Kaur and Gupta, 2010). Depending on the requirements of specific tasks, the types to be recognized can be person, location, organization and date, which are mostly used in newswire (Tjong et al., 2003), or other commonly used measures (percent, weight, money), email address, etc. It can also be domain-specific entity types such as medical drug names, disease symptoms and treatment, etc. (Asma Ben Abacha and Pierre Zweigenbaum, 2001). Named entity recognition is a challenging task which needs massive prior knowledge sources for better performance (Lev Ratinov, Dan Roth, 2009; Nadeau and Sekine, 2007). Many research works have been conducted in different domains with various approaches. Early studies focus on heuristic and handcrafted rules. By defining the formation patterns and context over lexical-syntactic features and term constituents, entities are recognized by matching the patterns against the input documents (Rau, Lisa F. 1991; Collins, Michael, Singer, Y. 1999). Rule-based systems may achieve a high degree of precision. However, the development process is time-consuming and porting these developed rules from one domain to another is a major challenge. Recent research in NER tends to use machine learning approaches (Andrew Borthwick. 1999; McCallum, Andrew and Li, W. 2003; Takeuchi K. and Collier N. 2002). The learning methods include various supervised, semi-supervised and unsupervised learning. Supervised learning tends to be the dominant technique for named entity recognition and classification (David Nadeau and Satoshi Sekine. 2007). However, supervised machine learning methods require large amounts of annotated documents for model training and their performance typically depends on the availability of sufficient high quality training data in the domain of interest. There are some systems which use hybrid methods to combine different rule-based and/or machine learning systems for improved performance over individual approaches (Srihari R. et al., 2000; Tim R. et al., 2012). Hybrid systems make the best use of the good features of different systems or methods to achieve the best overall performance. In this paper, we first select several publicly available and well-established NER tools in section 2. Then all the tools are validated in section 3 with CoNLL 2003 metrics and a customized partial matching measurement. Then we construct a hybrid NER system based on the best-performing NER tools in section 4. | 0
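The hybrid combination step can be illustrated with a small sketch: per-token labels from several NER tools are merged by majority vote, backing off to 'O' when no label wins a strict majority. The tools, label set and example sentence are invented for illustration; the actual hybrid system may instead weight each tool by its validation performance.

```python
from collections import Counter

def combine_ner_outputs(token_labels_per_tool, fallback="O"):
    """Majority-vote combination of per-token NER labels from several tools.

    token_labels_per_tool: list of label sequences, one per tool,
    all aligned to the same tokenisation.
    """
    n_tokens = len(token_labels_per_tool[0])
    assert all(len(seq) == n_tokens for seq in token_labels_per_tool)

    combined = []
    for i in range(n_tokens):
        votes = Counter(seq[i] for seq in token_labels_per_tool)
        label, count = votes.most_common(1)[0]
        # Require a strict majority; otherwise back off to the fallback label.
        combined.append(label if count > len(token_labels_per_tool) / 2 else fallback)
    return combined

# Hypothetical outputs from three tools for "John lives in New York".
tool_a = ["B-PER", "O", "O", "B-LOC", "I-LOC"]
tool_b = ["B-PER", "O", "O", "B-ORG", "I-ORG"]
tool_c = ["B-PER", "O", "O", "B-LOC", "I-LOC"]
print(combine_ner_outputs([tool_a, tool_b, tool_c]))
# ['B-PER', 'O', 'O', 'B-LOC', 'I-LOC']
```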
In this paper, we describe the system that we develop as part of our participation in the Workshop on Asian Translation (WAT) 2016 (Nakazawa et al., 2016) for the English-Hindi language pair. This year the English-Hindi language pair is adopted for the translation task for the first time in WAT. Apart from that, the said language pair was introduced in WMT 14 (Bojar et al., 2014). Our system is based on the Statistical Machine Translation (SMT) approach. The shared task organizers provide an English-Hindi parallel corpus for training and tuning and a monolingual corpus for building the language model. The literature shows that there exist many SMT-based approaches for different language pairs and domains. Linguistic-knowledge-independent techniques such as phrase-based SMT (Koehn et al., 2003) and hierarchical phrase-based SMT (Chiang, 2005; Chiang, 2007) manage to perform efficiently as long as sufficient parallel text is available. Our submitted system is based on hierarchical SMT, the performance of which is improved by performing reordering on the source side and augmenting an English-Hindi bilingual dictionary. The rest of the paper is organized as follows. Section 2 describes the various methods that we use. Section 3 presents the details of datasets, experimental setup, results and analysis. Finally, Section 4 concludes the paper. | 0
In this age of growing e-commerce markets, reviews are taken very seriously, however, manually writing these reviews has become an extremely laborious task. This leads us to work on systems which can automatically generate realistic looking reviews which can be automatically customized to the user writing it, the product being reviewed and the desired rating the generated review should express. This makes the reviewing process much easier which can potentially increase the number of reviews posted leading to a more informed choice for potential buyers.Natural Language Generation has always been one of the most challenging task in the field of natural language processing. Most of the present day approaches very loosely constraint the generation process often leading to ill formed or meaningless generations. Ensuring semantic and syntactic coherence across the generated sentence is also an immensely challenging task. We explore enforcing additional constraints on the generation process which we hope will restrict the generation manifold and generate more meaningful and semantically consistent sentence also adhering to the desired ratings. In this paper we attempt to perform the following tasks:• We implement an automatic review generator using Long Short Term Memory Networks (LSTM) (Hochreiter and Schmidhuber, 1997) , which has proved useful in remembering context and modelling sentence syntax. We also incorporate a soft attention mechanism which helps the model to attend better to the relevant context and generate better reviews. Such a review generator system caters to each individual users reviewing style and would convert a user provided rating into a review personalized to the users writing style and based on their rating.• Sentiment Analysis from reviews. This includes going through the reviews and trying to gauge user sentiment and assign a score based on it. Score parameters have been found to be much easier to go through and base ones decisions upon rather than manually going through hundreds of reviews.• In this paper we propose an additional cyclic consistency loss term which allows for joint training of the generation network with the sentiment analysis network. This improves the generator network which is now more constrained and is forced to generate reviews which adhere to the provided rating.• The use of 'soft' generation instead of a sampling based generation allows end to end gradient propagation allowing us to train our models end to end. | 0 |
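A minimal sketch of the joint objective described above is given below, assuming a toy vocabulary and PyTorch modules chosen for illustration (the layer sizes, the bag-of-embeddings sentiment regressor and the 0.5 weight on the consistency term are assumptions, not the paper's architecture). The point is the "soft" generation: the generator's softmax distributions are fed directly to the sentiment network, so the cyclic consistency loss is differentiable end to end.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

V, E, H = 1000, 64, 128          # toy vocabulary / embedding / hidden sizes

class SoftReviewGenerator(nn.Module):
    """LSTM language model conditioned on a rating; emits soft token distributions."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(V, E)
        self.rating_proj = nn.Linear(1, H)            # rating -> initial hidden state
        self.lstm = nn.LSTM(E, H, batch_first=True)
        self.out = nn.Linear(H, V)

    def forward(self, tokens, rating):
        h0 = torch.tanh(self.rating_proj(rating)).unsqueeze(0)   # (1, B, H)
        c0 = torch.zeros_like(h0)
        states, _ = self.lstm(self.embed(tokens), (h0, c0))
        return F.softmax(self.out(states), dim=-1)               # (B, T, V) soft tokens

class SentimentRegressor(nn.Module):
    """Predicts a rating from (soft) token distributions via a bag of embeddings."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(V, E)
        self.score = nn.Linear(E, 1)

    def forward(self, soft_tokens):
        # Expected embedding under the soft distribution keeps everything differentiable.
        avg = soft_tokens @ self.embed.weight          # (B, T, E)
        return self.score(avg.mean(dim=1))             # (B, 1)

gen, sent = SoftReviewGenerator(), SentimentRegressor()
tokens = torch.randint(0, V, (4, 12))                  # toy review token ids
ratings = torch.rand(4, 1) * 4 + 1                     # ratings in [1, 5]

soft = gen(tokens, ratings)
lm_loss = F.nll_loss(torch.log(soft[:, :-1].clamp_min(1e-9)).reshape(-1, V),
                     tokens[:, 1:].reshape(-1))
cycle_loss = F.mse_loss(sent(soft), ratings)           # generated text should imply its rating
loss = lm_loss + 0.5 * cycle_loss                      # 0.5 is an arbitrary illustrative weight
loss.backward()
```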
The role of inference as it relates to natural language (NL) semantics has oft been neglected. Recently, there has been a move away by some NL semanticists from the heavy machinery of, say, Montagovian-style semantics to a more proof-based approach. This represents a belief that the notion of derivability plays as central a role in NL semantics as that of entailment. Beginning with van Benthem (1986), and continuing on with Valencia (1991), Dowty (1994), Gilad and Francez (2005), Moss (2008), Moss (2010), van Benthem (2008), Moss (2009) and Moss (2012) among others, the study of various natural logics has become commonplace. Natural logicians place an emphasis on the development and study of proof theories that capture the sort of inferences speakers of a particular NL like English make in practice. It should be said, though, that 'natural logic' is a catchall term that refers to either the study of various monotonicity calculi à la van Benthem or Aristotelean-style syllogistic fragments à la Moss. Although researchers tend to study each type of system independently, MacCartney (2009) and MacCartney and Manning (2009) (henceforth M&M) recently developed an algorithmic approach to natural logic that attempts to combine insights from both monotonicity calculi and various syllogistic fragments to derive compositionally the relation between two NL sentences from the relations of their parts. At the heart of their system, M&M begin with seven intuitive lexical-semantic relations that NL expressions can stand in, e.g., synonymy and antonymy, and then ask the question: if φ stands in some lexical-semantic relation to ψ, and ψ stands in (a possibly different) lexical-semantic relation to θ, what lexical-semantic relation (if any) can be concluded about the relation between φ and θ? This type of reasoning has the familiar shape of a logical inference rule, a schema of which is given in (1): (1) φ R ψ, ψ S θ ⊢ φ T θ. Drawing from their stock of lexical-semantic relations, for every instance of R and S, M&M reason semantically to calculate T, and present their results in what they call a join table. However, to my knowledge at least, the logical properties of their join table have not been explored in any real detail. The purpose of this paper is to give M&M's table a proper logical treatment. As I will show, the table has the underlying form of a syllogistic fragment and relies on a sort of generalized transitive reasoning. Here, I define a basic set-theoretic semantics and proof calculus for M&M's join table and prove a completeness theorem for it. | 0
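The join operation at the heart of the table can be made concrete with a brute-force set-theoretic sketch: each relation is a predicate over pairs of subsets of a small universe, and the join of R and S collects every relation T guaranteed to hold of (A, C) whenever R(A, B) and S(B, C). The relation inventory below is a simplified stand-in for M&M's seven relations, not their exact definitions.

```python
from itertools import combinations

U = {0, 1, 2}                       # a tiny universe is enough to refute non-entailments

def powerset(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Simplified stand-ins for a few lexical-semantic relations over sets A, B subsets of U.
RELATIONS = {
    "equivalence":  lambda a, b: a == b,
    "forward_ent":  lambda a, b: a < b,                        # strict subset
    "reverse_ent":  lambda a, b: a > b,                        # strict superset
    "alternation":  lambda a, b: not (a & b) and (a | b) != U,
    "independence": lambda a, b: True,                         # no constraint
}

def join(r, s):
    """Relations T such that r(A,B) and s(B,C) jointly guarantee T(A,C)."""
    witnesses = [(a, b, c) for a in powerset(U) for b in powerset(U) for c in powerset(U)
                 if RELATIONS[r](a, b) and RELATIONS[s](b, c)]
    return [t for t, pred in RELATIONS.items()
            if all(pred(a, c) for a, b, c in witnesses)]

print(join("forward_ent", "forward_ent"))   # transitivity: forward entailment is preserved
print(join("forward_ent", "reverse_ent"))   # only 'independence' is guaranteed here
```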
Mathematical symbols and descriptions appear in various forms across document section boundaries without explicit markups, and mathematical symbols appear in the form of long texts. Thus, linking mathematical symbols and their descriptions is challenging. SemEval 2022 task 12: 'linking mathematical symbols to their descriptions' (Lai et al., 2022a), is a relation extraction task targeted at scientific documents divided into two sub-tasks: sub-task A is a named entity recognition (NER) task that aims to predict the span of symbols and descriptions, and sub-task B is a relation extraction (RE) task that aims to predict relations between symbols and descriptions. Extracting these entities and relations is done to discover relational facts from unstructured texts. This problem can be decomposed into NER (Tjong Kim Sang and De Meulder, 2003; Ratinov and Roth, 2009) and RE (Zelenko et al., 2002; Bunescu and Mooney, 2005). Early works employed a two-stage relation extraction system, training one model to extract entities (Florian et al., 2004) and another model to classify relations between these entities (Zhou et al., 2005; Chan and Roth, 2011). To reduce the error propagation of NER or better capture the interactions between NER and RE, joint models have been proposed as a promising approach that are based on an end-to-end method or on the setting of multi-task learning using shared representations (Wadden et al., 2019; Lin et al., 2020; Wang and Lu, 2020). Recently, it has been observed that RE based on the shared encoder is suboptimal, but the use of separated encoders for NER and RE has shown improved performance compared to shared encoders, reexamining the effectiveness of the simple pipelined two-stage approach (Zhong and Chen, 2021; Ye et al., 2021). From these results, we hypothesize that whereas separated encoders for NER and RE can learn customized representations useful for each task, joint models may include irrelevant information in the learned representation for NER or RE tasks, lowering the performance of the model. These results of using distinct encoders (Zhong and Chen, 2021) encourage us to adopt the aforementioned two-stage approach for NER and RE tasks, consisting of 1) MRC-based NER and 2) span pair classification for RE, as follows: 1. MRC-based NER using a symbol tokenizer: Unlike the PURE system of (Zhong and Chen, 2021) that exploits the standard span-based NER of (Lee et al., 2017; Wadden et al., 2019), our NER model is based on an MRC-based model (Li et al., 2020), which treats NER as an MRC problem by providing an entity type as a question and using an MRC model to extract its entity mentions as answers. [Figure 1: Our NER model architecture based on MRC.] As in (Li et al., 2020), an MRC model is based on two binary classification models: first, the position classifier predicts the start and end indexes to create a set of valid answer spans; second, the span classifier determines whether each of the valid spans is an answer. As a pretrained encoder for the NER model, SciBERT of (Beltagy et al., 2019) is used.
Before presenting the input to SciBERT's tokenizer, we apply a rule-based symbol tokenizer to precisely predict the span boundary of mathematical symbols that appear in scientific documents. 2. Span pair classification for RE with solid markers: Similar to the PURE system of (Zhong and Chen, 2021), a pair of spans resulting from the NER model is given as an input but with solid markers, i.e., using a typed entity marker, as in the works of (Wu and He, 2019; Zhou and Chen, 2021). The remainder of this paper is organized as follows: Section 2 presents our system architecture in detail, Sections 3-5 describe the experimental setting, results, and ablation studies, and Section 6 contains our concluding remarks and future works. | 0
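The decoding performed by such an MRC-style NER head can be sketched independently of the transformer: given per-token start and end probabilities from the position classifier, candidate spans are enumerated and kept only if the span classifier accepts them. The thresholds, maximum span length and toy scores below are illustrative assumptions.

```python
def decode_mrc_spans(start_probs, end_probs, span_scorer,
                     start_th=0.5, end_th=0.5, span_th=0.5, max_len=10):
    """Enumerate (start, end) candidates from the position classifier and
    keep those accepted by the span classifier."""
    starts = [i for i, p in enumerate(start_probs) if p >= start_th]
    ends = [j for j, p in enumerate(end_probs) if p >= end_th]
    spans = []
    for i in starts:
        for j in ends:
            if i <= j < i + max_len and span_scorer(i, j) >= span_th:
                spans.append((i, j))
    return spans

# Toy probabilities for a 6-token answer region such as "let $ \\ t $ =".
start_probs = [0.1, 0.9, 0.2, 0.1, 0.1, 0.1]
end_probs   = [0.1, 0.1, 0.1, 0.2, 0.8, 0.1]
# A hypothetical span classifier that only accepts the span covering tokens 1..4.
span_scorer = lambda i, j: 0.9 if (i, j) == (1, 4) else 0.1

print(decode_mrc_spans(start_probs, end_probs, span_scorer))   # [(1, 4)]
```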
In this article, we propose a Word Sense Induction (WSI) task to capture the sense alternation of English dot types, as found in context. Dot type is the Generative Lexicon (GL) term to account for a noun that can denote at least two senses as a complex semantic class (Pustejovsky, 1995) . Consider the noun England in the following example from the American National Corpus (ANC) (Ide and Macleod, 2001) as an illustration.(1) (a) Manuel died in exile in 1932 in England. (b) England was being kept busy with other concerns. (c) England is conservative and rainy.In this example, (1a) shows the literal sense of England as a location, while (1b) demonstrates the metonymic sense of England as an organization. Dot types also allow for both senses to be simultaneously active in a predicate, as in example (1c).All proper names representative of geopolitical entities, for instance, demonstrate this type of classwide sense alternation, which is defined as regular polysemy (Apresjan, 1974) . Copestake (2013) emphasizes the relevance of distributional evidence in tasks regarding phenomena characteristic to regular polysemy, such as underspecification, because it incorporates frequency effects and is theory-neutral, requiring only that examples cluster in a way that mirrors their senses.Thus far, underspecification in dot types has been formalized in the linguistic theory of lexical semantics, but has not been explicitly studied using WSI. Kilgariff (1997) claims that word senses should be "construed as abstractions over clusters of word usages". Following this claim, our strategy employs WSI, which aims to automatically induce senses of words by clustering patterns found in a corpus (Lau et al., 2012; Jurgens, 2012) . In this way, we hypothesize that dot-type nominals will generate semantically more consistent (i.e. more homogeneous, cf. Section 5) groupings if clustered into more than two induced senses. This paper is organized as follows: we discuss related work (Section 2); elaborate upon our use of WSI and methodology employed (Section 3 and Section 4), as well as present results obtained; we discuss our results (Section 5) and conclude with final observations and future work (Sections 6 and 7). | 0 |
Puns represent a broad class of humorous word play. This paper focuses on two types of puns, homographic and heterographic.A homographic pun is characterized by an oscillation between two senses of a single word, each of which leads to a different but valid interpretation: I'd like to tell you a chemistry joke but I'm afraid of your reaction.Here the oscillation is between two senses of reaction. The first that comes to mind is perhaps that of a person revealing their true feelings about something (how they react), but then the relationship to chemistry emerges and the reader realizes that reaction can also mean the chemical sense, where substances change into others.Homographic puns can also be created via compounding:He had a collection of candy that was in mint condition.The pun relies on the oscillation between the flavor mint and the compound mint condition, where candy interacts with mint and mint condition interacts with collection.A heterographic pun relies on a different kind of oscillation, that is between two words that nearly sound alike, rhyme, or are nearly spelled the same.The best angle from which to solve a problem is the try angle.Here the oscillation is between try angle and triangle, where try suggests that the best way to solve a problem is to try harder, and triangle is (perhaps) the best kind of angle.This example illustrates one of the main challenges of heterographic puns, and that is identifying multi word expressions that are used as a kind of compound, but without being a standard or typical compound (like the very non-standard try angle). One reading treats try angle as a kind of misspelled version of triangle while the other treats them as two distinct words (try and angle). There is also a kind of oscillation between senses here, since try angle can waver back and forth between the geometric sense and the one of making effort.During our informal study of both heterographic and homographic puns, we observed a fairly clear pattern where a punned word will occur towards the end of a sentence and has a sense that is semantically related to an earlier word, and another sense that fits the immediate context in which it occurs. It often seemed that the sense that fits the immediate context is a more conventional usage (as in afraid of your reaction) and the more amusing sense is that which connects to an earlier word via some type of semantic relation (chemical reaction). This is more complicated in the case of heterographic puns since the punned word can rely on pronunciation or spelling to create the effect (i.e., try angle versus triangle). In this work we focused on exploiting these long distance semantic relations, although in future work we plan to consider the use of language models to identify more conventional usages.We used two versions of the WordNet SenseRelate word sense disambiguation algorithm 1 : Tar-getWord (Patwardhan et al., 2005) and AllWords (Pedersen and Kolhatkar, 2009) . Both have the goal of finding the assignment of senses in a context that maximizes their overall semantic relatedness (Patwardhan et al., 2003) according to measures in WordNet::Similarity 2 (Pedersen et al., 2004) .We relied on the Extended Gloss Overlaps measure (lesk) and the Gloss vector measure (vector) (Patwardhan and Pedersen, 2006) .The intuition behind a Lesk measure is that related words will be defined using some of the same words, and that recognizing these overlaps can serve as a means of identifying relationships between words (Lesk, 1986) . 
The Extended Gloss overlap measure (hereafter simply lesk) extends this idea by considering not only the definitions of the words themselves, but also concatenates the definitions of words that are directly related via hypernym, hyponym, and other relations according to WordNet.The Gloss Vector measure (hereafter simply vector) extends this idea by representing each word in a concatenated definition with a vector of co-occurring words, and then creating a representation of this definition by averaging together all of these vectors. The relatedness between two word senses can then be measured by finding the cosine between their respective vectors. | 0 |
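A small sketch of the vector measure: each sense's extended gloss is represented by the average of the co-occurrence vectors of its words, and two senses are compared by the cosine of their gloss vectors. The toy co-occurrence space and glosses are invented for the example; WordNet::Similarity derives the real vectors from corpus co-occurrence counts over WordNet glosses.

```python
import numpy as np

# Hypothetical co-occurrence vectors for a handful of gloss words.
COOC = {
    "chemical":  np.array([1.0, 0.0, 0.2]),
    "substance": np.array([0.9, 0.1, 0.1]),
    "change":    np.array([0.5, 0.4, 0.3]),
    "feeling":   np.array([0.0, 1.0, 0.2]),
    "response":  np.array([0.1, 0.8, 0.4]),
}

def gloss_vector(gloss_words):
    """Average the co-occurrence vectors of the words in an (extended) gloss."""
    vecs = [COOC[w] for w in gloss_words if w in COOC]
    return np.mean(vecs, axis=0)

def vector_relatedness(gloss_a, gloss_b):
    a, b = gloss_vector(gloss_a), gloss_vector(gloss_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

reaction_chem = ["chemical", "change", "substance"]   # chemistry sense of 'reaction'
reaction_resp = ["feeling", "response"]               # 'how someone reacts' sense
chemistry     = ["chemical", "substance"]

print(vector_relatedness(reaction_chem, chemistry))   # high: the pun-relevant sense
print(vector_relatedness(reaction_resp, chemistry))   # lower: the conventional sense
```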
All NLP models utilise a loss function as minimisation objective for model training. Choosing the most appropriate loss function for a particular task can play an important role in the performance of the trained models at test time and in-field. However, almost invariably the utilised loss function is the negative log-likelihood (NLL), also known as cross entropy. This is due to a number of attractive properties of the NLL such as its smoothness and differentiability in large regions of the parameter space. In addition, training with minimum NLL often leads to models of high empirical accuracy. However, this function is not exempt from shortcomings. To name two, 1) the NLL only rewards the probability of the ground-truth class and does not distinguish between the other classes, and 2) it does not impose explicit margins (or ratios) between the probability assigned to the ground-truth class and those assigned to the other classes. For this reason, other differentiable loss functions are regarded as appealing alternatives or complements to the NLL. Amongst them are the hinge loss (Cortes and Vapnik, 1995) and the REINFORCE loss (Williams, 1992; Ranzato et al., 2016) which both attempt to directly optimise the performance measure used to evaluate the model's accuracy (e.g., the Hamming loss, the CoNLL score, the BLEU score etc). Both these losses can be used for the usual classification at token level or for the joint classification of all the tokens in a sentence (i.e., structured prediction) (Tsochantaridis et al., 2005) . Given that targeting the evaluation loss during training may lead to improved performance at test time, in this short paper we explore the use of a structured hinge loss for named-entity recognition (NER). Our main contribution is the introduction of additional constraints between specific labelings aimed at increasing the accuracy of the learned model. Experimental results over a challenging NER dataset (OntoNotes 5.0, which is still far from accuracy saturation) show that the proposed approach has been able to achieve higher accuracy than both the NLL and a conventional structured hinge loss. | 0 |
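For sequence labelling, a structured hinge loss with a Hamming cost can be written down once labelings can be scored and cost-augmented decoding is available; the brute-force decoder below only works for toy label spaces and stands in for the Viterbi-style search used in practice. The additional constraints between specific labelings mentioned above would enter by replacing the plain Hamming cost with a task-specific cost (not shown), and the toy feature weights are invented for illustration.

```python
from itertools import product

LABELS = ["O", "B-PER", "B-LOC"]

def score(x_tokens, y_labels, w):
    """Toy linear model: sum of (token, label) feature weights."""
    return sum(w.get((tok, lab), 0.0) for tok, lab in zip(x_tokens, y_labels))

def hamming(y_gold, y_pred):
    return sum(g != p for g, p in zip(y_gold, y_pred))

def structured_hinge(x_tokens, y_gold, w):
    """max_y [score(y) + cost(y_gold, y)] - score(y_gold), clipped at zero."""
    best = max(score(x_tokens, y, w) + hamming(y_gold, y)
               for y in product(LABELS, repeat=len(x_tokens)))
    return max(0.0, best - score(x_tokens, y_gold, w))

tokens = ["John", "visited", "Paris"]
gold = ["B-PER", "O", "B-LOC"]
weights = {("John", "B-PER"): 2.0, ("visited", "O"): 1.5,
           ("Paris", "B-LOC"): 0.4, ("Paris", "B-PER"): 0.6}
print(structured_hinge(tokens, gold, weights))   # positive: 'Paris' is not separated by a margin
```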
Dependency-based syntactic parsing has become a widely used technique in natural language processing, and many different parsing models have been proposed in recent years (Yamada and Matsumoto, 2003; Nivre et al., 2004; McDonald et al., 2005a; Titov and Henderson, 2007; Martins et al., 2009) . One of the unresolved issues in this area is the proper treatment of non-projective dependency trees, which seem to be required for an adequate representation of predicate-argument structure, but which undermine the efficiency of dependency parsing (Neuhaus and Bröker, 1997; Buch-Kromann, 2006; McDonald and Satta, 2007) .Caught between the Scylla of linguistically inadequate projective trees and the Charybdis of computationally intractable non-projective trees, some researchers have sought a middle ground by exploring classes of mildly non-projective dependency structures that strike a better balance between expressivity and complexity (Nivre, 2006; Kuhlmann and Nivre, 2006; Kuhlmann and Möhl, 2007; Havelka, 2007) . Although these proposals seem to have a very good fit with linguistic data, in the sense that they often cover 99% or more of the structures found in existing treebanks, the development of efficient parsing algorithms for these classes has met with more limited success. For example, while both Kuhlmann and Satta (2009) and Gómez-Rodríguez et al. (2009) have shown how well-nested dependency trees with bounded gap degree can be parsed in polynomial time, the best time complexity for lexicalized parsing of this class remains a prohibitive O(n 7 ), which makes the practical usefulness questionable.In this paper, we explore another characterization of mildly non-projective dependency trees based on the notion of multiplanarity. This was originally proposed by Yli-Jyrä (2003) but has so far played a marginal role in the dependency parsing literature, because no algorithm was known for determining whether an arbitrary tree was mplanar, and no parsing algorithm existed for any constant value of m. The contribution of this paper is twofold. First, we present a procedure for determining the minimal number m such that a dependency tree is m-planar and use it to show that the overwhelming majority of sentences in dependency treebanks have a tree that is at most 2planar. Secondly, we present a transition-based parsing algorithm for 2-planar dependency trees, developed in two steps. We begin by showing how the stack-based algorithm of Nivre (2003) can be generalized from projective to planar structures. We then extend the system by adding a second stack and show that the resulting system captures exactly the set of 2-planar structures. Although the contributions of this paper are mainly theoretical, we also present an empirical evaluation of the 2planar parser, showing that it outperforms the projective parser on four data sets from the CoNLL-X shared task (Buchholz and Marsi, 2006) . | 0 |
Data-driven statistical spoken dialogue systems (SDS) (Lemon and Pietquin, 2012; Young et al., 2013) are a promising approach for realizing spoken dialogue interaction between humans and machines. Up until now, these systems have successfully been applied to single-or multi-domain taskoriented dialogues Lison, 2011; Wang et al., 2014; Papangelis and Stylianou, 2017; Peng et al., 2017) where each dialogue is modelled as multiple independent single-domain sub-dialogues. However, this multi-domain dialogue model (MDDM) does not offer an intuitive way of representing multiple objects of the same type (e.g., multiple restaurants) or dynamic relations between these objects. To the best of our knowledge, neither problem has yet been addressed in statistical SDS research.The goal of this paper is to propose a new dialogue model-the conversational entity dialogue model (CEDM)-which offers an intuitive way of modelling dialogues and complex dialogue structures inside the dialogue system. Inspired by Grosz (1978) , the CEDM is centred around objects and relations instead of domains thus offering a fundamental change in how we think about statistical dialogue modelling. The CEDM allows• to model dynamic relations directly, independently and persistently so that the relations may be addressed by the user and the system,• the system to talk about multiple objects of the same type, e.g., multiple restaurants, while still allowing feasible policy learning. The remainder of the paper is organized as follows: after presenting a brief motivation and related work in Section 2, Section 3 presents background information on statistical SDSs. Section 4 contains the main contribution and describes the conversational entity dialogue model in detail. Looking at one aspect of the CEDM, the modelling of relations, Section 5 describes a prototype implementation and shows the benefits of the CEDM in experiments with a simulated user. Section 6 concludes the paper with a list of open questions which need to be addressed in future work. | 0 |
To deal with automatic construction of translation lexicons, conventional research on machine translation (MT) [3] and cross-language information retrieval (CLIR) [1, 5, 7, 10, 13, 18] has generally used statistical techniques to automatically extract word translations from domain-specific parallel/comparable bilingual texts, such as bilingual newspapers [4, 11, 12, 20, 21] . However, only a certain set of their translations can be extracted through corpora with limited domains. In our research, we are interested in extracting translations of technical terms and proper names in diverse subjects, which are especially needed in performing CLIR services for Web users, e.g., "Hussein" (海珊/哈珊/侯賽因), "SARS" (嚴重急性呼吸道症候群). Existing CLIR systems usually rely on bilingual dictionaries for query translation [1, 13, 15] . Unfortunately, our analysis of Dreamer query log collected in Taiwan (see Section 3.1) showed that 74% of the 20,000 high frequent Web queries can not be found in general-purpose English-Chinese dictionaries (they are called unknown terms in this paper). How to automatically find translations for unknown terms, therefore, has become a major challenge for cross-language Web search.Different from previous works, we focus on investigating new approaches to mining multilingual Web resources [19] . We have proposed a novel approach to extracting translations of Web queries through the mining of Web anchor texts and link structures [16, 17] . An anchor text is the descriptive part of an out-link of a Web page used to provide a brief description of the linked page. A variety of anchor texts in multiple languages might link to the same pages from all over the world. For example, Figure 1 shows a typical example, in which there are a variety of anchor texts in multiple languages linking to the Yahoo! from all over the world. Such a bundle of anchor texts pointing together to the same page is called an anchor-text set. Web anchor-text sets may contain similar description texts in multiple languages. Thus, for an unknown term appearing in some anchor-text sets, it is likely that its corresponding target translations appear together in the same anchor-text sets.However, discovering translation knowledge from the Web has not been fully explored. In this paper, we intend to investigate another kind of Web resource, search results, and try to combine them with the anchor texts to benefit term translation. Chinese pages on the Web consist of rich texts in a mixture of Chinese (main language) and English (auxiliary language), and many of them contain translations of proper nouns. According to our observations, many search result pages in Chinese Web usually contain snippets of summaries in a mixture of Chinese and English. For example, Figure 2 illustrates the search-result page of the English query "National Palace Museum," which was submitted to Google for searching Chinese pages, could obtain many relevant results containing both the query itself and its Chinese aliases. To explore search results on extraction of term translation, we have employed two methods: the chi-square test and context-vector analysis.Based on a novel integration of the developed anchor-text-and search-result-based methods, we implemented an experimental system, called LiveTrans, to provide English-Chinese translation suggestion and cross-lingual retrieval of both Web pages and images. The purpose of this paper is to introduce our experiences in developing the methods and implementing the system. | 0 |
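The chi-square component scores a candidate translation by how strongly it co-occurs with the source query in the retrieved snippets; a minimal version of that computation is shown below. The 2x2 contingency counts would come from the search-result pages, and the numbers used here are invented for illustration.

```python
def chi_square(a, b, c, d):
    """2x2 chi-square score for a (query, candidate-translation) pair.

    a: snippets containing both query and candidate
    b: snippets containing the query but not the candidate
    c: snippets containing the candidate but not the query
    d: snippets containing neither
    """
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

# Hypothetical counts from 1000 retrieved snippets for "National Palace Museum".
print(chi_square(a=120, b=30, c=10, d=840))   # strong association -> likely translation
print(chi_square(a=15, b=135, c=115, d=735))  # weak association  -> unlikely translation
```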
System combination aims to improve the translation quality by combining the outputs from multiple individual MT systems. The state-of-the-art system combination methodologies can be roughly categorized as follows (Karakos et al., 2010) :1. Confusion network based: confusion network is a form of lattice with the constraint that all paths need to pass through all nodes. An example of a confusion network is shown in Figure 1 .Here, the set of arcs between two consecutive nodes represents a bin, the number following a word is the count of this word in its bin, and each bin has the same size. The basic methodology of system combination based on confusion network includes the following steps: (a) Choose one system output as the "skeleton", which roughly decides the word order. (b) Align further system outputs to the skeleton, thus forming a confusion network. (c) Rescore the final confusion network using a language model, then pick the best path as the output of combination.A textual representation (where each line contains the words and counts of each bin) is usually the most convenient for machine processing.2. Joint optimization based: unlike building confusion network, this method considers all system outputs at once instead of incrementally. Then a log-linear model is used to derive costs, followed by a search algorithm to explore the combination space (Jayaraman et al., 2005; Heafield et al., 2009; He et al., 2009) .3. Hypothesis selection based: this method only includes algorithms that output one of the input translations, and no word selection from multiple systems is performed. Typical algorithms can be found in (Rosti et al., 2007) .This paper describes the JHU system combination submitted to the Sixth Workshop on Statistical Machine Translation (WMT-11) (http://statmt.org/wmt11/index.html ). The JHU system combination is confusion network based as described above, following the basic system combination framework described in (Karakos et al., 2008) . However, instead of ITG alignments that were used in (Karakos et al., 2008) , alignments based on TER-plus (Snover et al., 2009) were used now as the core system alignment algorithm.The rest of the paper is organized as follows: Section 2 introduces the application of TER-plus in system combination. Section 3 introduces the JHU system combination pipeline. Section 4 presents the combination results and concluding remarks appear in Section 5. | 0 |
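The textual representation mentioned above (one bin per line, each word with its count) also makes the final rescoring step easy to sketch: from each bin, pick the word that maximises a combination of its vote count and a language-model score. The toy bins and the stand-in "language model" below are placeholders for the real confusion network and LM, and the bin-by-bin greedy choice is a simplification of a full best-path search.

```python
import math

# One bin per line: a list of (word, count) pairs; '*DEL*' marks an empty arc.
bins = [
    [("the", 3), ("a", 1)],
    [("cat", 2), ("hat", 1), ("*DEL*", 1)],
    [("sat", 4)],
]

def lm_logprob(word, prev):
    """Stand-in LM that ignores context; a real system uses an n-gram or neural LM."""
    return math.log(0.5 if word != "*DEL*" else 0.2)

def decode(bins, lm_weight=1.0):
    """Greedy, bin-by-bin decoding combining vote counts with LM scores."""
    output, prev = [], "<s>"
    for b in bins:
        word, count = max(b, key=lambda wc: math.log(wc[1]) + lm_weight * lm_logprob(wc[0], prev))
        if word != "*DEL*":
            output.append(word)
        prev = word
    return " ".join(output)

print(decode(bins))   # 'the cat sat'
```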
Argumentation mining is a relatively new and active area of research in the natural language processing (NLP) community, focusing on extracting argument components (e.g., claims, premises) and determining the relationships (e.g., support, attack) between them. Recently, researchers have begun work on modeling an intriguing linguistic phenomenon, the persuasiveness of arguments. In this paper, we examine argument persuasiveness in the context of an under-investigated task in argument mining, argument persuasiveness scoring. Given a text consisting of an argument written for a particular topic, the goal of argument persuasiveness scoring is to assign a score to the text that indicates how persuasive the argument is. An argument persuasiveness scoring system can be used in a variety of situations. In an online debate, for instance, an author's primary goal is to convince others of the argument expressed in her comment(s). Similarly, in persuasive essay writing, an author should establish convincing arguments. In both situations, a persuasiveness scoring system could provide useful feedback to these authors on how persuasive their arguments are. Being a discourse-level task, argument persuasiveness scoring is potentially more challenging than many NLP tasks. Oftentimes, argument persuasiveness can only be determined by understanding the discourse, not by the presence or absence of lexical cues. As an example, consider the debate argument shown in Table 1, which is composed of the author's assertion and her justification of the assertion written in response to a debate motion. It is fairly easy for a human to determine that this argument should be assigned a low persuasiveness score because the argument could be more clear. However, the same is not true for a machine, primarily because it is not possible to determine the persuasiveness of this argument merely by considering the words or phrases appearing in it. Table 1. Motion: This House would ban teachers from interacting with students via social networking websites. Assertion: Acting as a warning signal for children at risk. Justification: It is very difficult for a child to realize that he is being groomed; they are unlikely to know the risk. After all, a teacher is regarded as a trusted adult. But, if the child is aware that private electronic contact between teachers and students is prohibited by law, the child will immediately know the teacher is doing something he is not supposed to if he initiates private electronic contact. This will therefore act as an effective warning sign to the child and might prompt the child to tell a parent or another adult about what is going on. Given the difficulty of the task, it is conceivable that unsupervised argument persuasiveness scoring is very challenging. Nevertheless, a solution to unsupervised argument persuasiveness scoring is of practical significance. This is because of the high cost associated with manually creating persuasiveness-annotated data needed to train classifiers in a supervised manner. This contrasts with tasks such as polarity classification and stance classification. In these tasks, large amounts of annotated data can be harvested from the Web, as it is typical for a user to explicitly indicate her polarity/stance while writing her comments in a discussion/debate forum. We propose a lightly-supervised approach to argument persuasiveness scoring. To our knowledge, this is the first lightly-supervised approach to the task: virtually all previous work involving argument persuasiveness has adopted supervised approaches, training models with a large number of surface features that encode lexico-syntactic information. Note that learning from a large number of lexico-syntactic features is difficult, if not impossible, when annotated data is scarce. Hence, we explore a different idea, addressing lightly-supervised argument persuasiveness scoring via an error-modeling approach. Specifically, guided by theoretical work on persuasiveness, we begin by defining a set of errors that could negatively impact an argument's persuasiveness. The key step, then, is to model an argument's errors: given an argument, we predict the presence and severity of the errors it possesses in an unsupervised manner by bootstrapping from a set of heuristically labeled seeds. Finally, we learn a persuasiveness predictor for each error-labeled argument from a small amount of persuasiveness-annotated data. Our contributions are two-fold. First, we propose the first lightly-supervised approach to persuasiveness scoring that rivals its supervised counterparts in performance on a new dataset consisting of 1,208 online debate arguments. Second, we make our annotated dataset publicly available. Given the difficulty of obtaining annotated data for this task, we believe that our dataset will be a valuable resource to the NLP community. | 0
Reduplication is known to many computational morphologists as the remaining problem. Unlike concatenative morphology, which involves concatenation of different components to create a word, reduplication involves copying. Reduplication is therefore non-concatenative, and involves copying of either the whole word or part of the word. The reduplicated part of the word could be a prefix or part of the stem or even a suffix. This copying is what makes reduplication an outstanding problem. Depending on the language, reduplication may be used to show plurality, iterativity, intensification or completeness (Kimenyi, 2004) . Some of the notable examples of reduplication in computational morphology that have been reported include Kinande, Latin, Bambara (Roark and Sproat, 2007) ; Tagalog and Malay (Beesley and Karttunen, 2003; Antworth, 1990) . In these cases, one language may be exhibiting full stem reduplication while another may be exhibiting partial stem reduplication (Syllable).Reduplication may generally be divided into two: bounded and unbounded. Bounded reduplication is the kind that involves just repeating a given part of the word. Unbounded reduplication differs from bounded reduplication in that bounded reduplication involves copying of a fixed number of morphemes. Unbounded reduplication is considerably more challenging to deal with compared with bounded reduplication. Unbounded reduplication has received little attention from researchers no wonder it is yet to be fully solved (Roark and Sproat, 2007) . In principle, finite state methods are capable of handling bounded reduplication, and here some solutions have been proposed. In this paper we present our attempt to solve both bounded and unbounded reduplication in Kinyarwanda a typical Bantu language. Kinyarwanda is the national and official language of Rwanda. It is closely related to Kirundi the national language of Burundi. It is the mother tongue of about 20 million people living in the great lakes region of East and Central Africa. Kinyarwanda is a less privileged language characterised by lack of electronic resources and insignificant presence on the Internet. The language has an official orthography where tones, long vowels and consonants are not marked. Kinyarwanda is agglutinative in nature, with complex, mainly prefixing morphology. Verb forms may have slots of up to 20 af-fixes to be attached to the root on both sides: left and right. Reduplication is a common feature and generally all verbs undergo some form of reduplication. Adjectives and adverbs tend to undergo full word reduplication, as we shall see in section 2. | 0 |
Following the success of monolingual word embeddings (Collobert et al., 2011), a number of studies have recently explored multilingual word embeddings. The goal is to learn word vectors such that similar words have similar vector representations regardless of their language (Zou et al., 2013; Upadhyay et al., 2016). Multilingual word embeddings have applications in machine translation, and hold promise for cross-lingual model transfer in NLP tasks such as parsing or part-of-speech tagging. A class of methods has emerged whose core technique is to learn linear maps between vector spaces of different languages (Mikolov et al., 2013a; Faruqui and Dyer, 2014; Vulic and Korhonen, 2016; Artetxe et al., 2016; Conneau et al., 2018). These methods work as follows: For a given pair of languages, first, monolingual word vectors are learned independently for each language, and second, under the assumption that word vector spaces exhibit comparable structure across languages, a linear mapping function is learned to connect the two monolingual vector spaces. [Figure 1. Top: Assumption of linearity implies a single linear map M. Bottom: Our hypothesis is that the underlying map is expected to be nonlinear but in small enough neighborhoods can be approximated by linear maps M_{x_i} for each neighborhood defined by x_i.] The map can then be used to translate words between the language pair. Both seminal (Mikolov et al., 2013a) and state-of-the-art methods (Conneau et al., 2018) found linear maps to substantially outperform specific non-linear maps generated by feedforward neural networks. Advantages of linear maps include: 1) In settings with limited training data, accurate linear maps can still be learned (Conneau et al., 2018; Zhang et al., 2017; Artetxe et al., 2017; Smith et al., 2017). For example, in unsupervised learning, (Conneau et al., 2018) found that using non-linear mapping functions made adversarial training unstable. 2) One can easily impose constraints on the linear maps at training time to ensure that the quality of the monolingual embeddings is preserved after mapping (Xing et al., 2015; Smith et al., 2017). However, it is not well understood to what extent the assumption of linearity holds and how it affects performance. In this paper, we investigate the behavior of word translation maps, and show that there is clear evidence of departure from linearity. Non-linear maps beyond those generated by feedforward neural networks have also been explored for this task (Lu et al., 2015; Shi et al., 2015; Wijaya et al., 2017). However, no attempt was made to characterize the resulting maps. In this paper, we allow for an underlying mapping function that is non-linear, but assume that it can be approximated by linear maps at least in small enough neighborhoods. If the underlying map is linear, all local approximations should be identical, or, given the finite size of the training data, similar. In contrast, if the underlying map is non-linear, the locally linear approximations will depend on the neighborhood. Figure 1 illustrates the difference between the assumption of a single linear map, and our working hypothesis of locally linear approximations to a non-linear map. The variation of the linear approximations provides a characterization of the nonlinear map.
We show that the local linear approximations vary across neighborhoods in the embedding space by an amount that is tightly correlated with the distance between the neighborhoods on which they are trained. The functional form of this variation can be used to test non-linear methods. | 0 |
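A concrete way to obtain such local linear approximations is to fit, for each neighborhood, an ordinary least-squares map from source to target vectors using only the translation pairs whose source embeddings fall in that neighborhood, and then compare the fitted matrices. The sketch below does this on synthetic data standing in for real bilingual embeddings; the neighborhood size and the synthetic non-linearity are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 2000

# Synthetic 'source' embeddings and a mildly non-linear ground-truth mapping.
X = rng.normal(size=(n, d))
A, B = rng.normal(size=(d, d)), 0.1 * rng.normal(size=(d, d))
Y = X @ A + (X ** 3) @ B            # the cubic term breaks global linearity

def local_map(X, Y, center, k=200):
    """Least-squares linear map fitted on the k source points nearest to `center`."""
    idx = np.argsort(np.linalg.norm(X - center, axis=1))[:k]
    M, *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)
    return M

c1, c2 = X[0], X[1]
M1, M2 = local_map(X, Y, c1), local_map(X, Y, c2)

# If the underlying map were linear, the local maps would coincide up to noise;
# their distance grows with the distance between the neighborhood centers.
print(np.linalg.norm(M1 - M2, "fro"), np.linalg.norm(c1 - c2))
```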
Natural language processing systems are imperfect. Decades of research have yielded analyzers that mis-identify named entities, mis-attach syntactic relations, and mis-recognize noun phrase coreference anywhere from 10-40% of the time. But these systems are accurate enough so that their outputs can be used as soft, if noisy, indicators of language meaning for use in downstream analysis, such as systems that perform question answering, machine translation, event extraction, and narrative analysis (McCord et al., 2012; Gimpel and Smith, 2008; Miwa et al., 2010; Bamman et al., 2013) .To understand the performance of an analyzer, researchers and practitioners typically measure the accuracy of individual labels or edges among a single predicted output structure y, such as a most-probable tagging or entity clustering arg max y P (y|x) (conditional on text data x). But a probabilistic model gives a probability distribution over many other output structures that have smaller predicted probabilities; a line of work has sought to control cascading pipeline errors by passing on multiple structures from earlier stages of analysis, by propagating prediction uncertainty through multiple samples (Finkel et al., 2006) , K-best lists (Venugopal et al., 2008; Toutanova et al., 2008) , or explicitly diverse lists (Gimpel et al., 2013) ; often the goal is to marginalize over structures to calculate and minimize an expected loss function, as in minimum Bayes risk decoding (Goodman, 1996; Kumar and Byrne, 2004) , or to perform joint inference between early and later stages of NLP analysis (e.g. Singh et al., 2013; Durrett and Klein, 2014) .These approaches should work better when the posterior probabilities of the predicted linguistic structures reflect actual probabilities of the structures or aspects of the structures. For example, say a model is overconfident: it places too much probability mass in the top prediction, and not enough in the rest. Then there will be little benefit to using the lower probability structures, since in the training or inference objectives they will be incorrectly outweighed by the top prediction (or in a sampling approach, they will be systematically undersampled and thus have too-low frequencies). If we only evaluate models based on their top predictions or on downstream tasks, it is difficult to diagnose this issue.Instead, we propose to directly evaluate the calibration of a model's posterior prediction distribution. A perfectly calibrated model knows how often it's right or wrong; when it predicts an event with 80% confidence, the event empirically turns out to be true 80% of the time. While perfect accuracy for NLP models remains an unsolved challenge, perfect calibration is a more achievable goal, since a model that has imperfect accuracy could, in principle, be perfectly calibrated. In this paper, we develop a method to empirically analyze calibration that is appropriate for NLP models ( §3) and use it to analyze common generative and discriminative models for tagging and classification ( §4).Furthermore, if a model's probabilities are meaningful, that would justify using its probability distributions for any downstream purpose, including exploratory analysis on unlabeled data. In §6 we introduce a representative corpus exploration problem, identifying temporal event trends in international politics, with a method that is dependent on coreference resolution. 
We develop a coreference sampling algorithm ( §5.2) which projects uncertainty into the event extraction, inducing a posterior distribution over event frequencies. Sometimes the event trends have very high posterior variance (large confidence intervals), 2 reflecting when the NLP system genuinely does not know the correct semantic extraction. This highlights an important use of a calibrated model: being able to tell a user when the model's predictions are likely to be incorrect, or at least, not giving a user a false sense of certainty from an erroneous NLP analysis. | 0 |
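Measuring calibration in the sense used here reduces to binning predictions by confidence and comparing each bin's mean confidence with its empirical accuracy; a weighted sum of the gaps gives an expected calibration error. The sketch below assumes per-prediction probabilities and correctness indicators are already available; the bin count and toy numbers are illustrative.

```python
import numpy as np

def calibration_table(confidences, correct, n_bins=10):
    """Per-bin (mean confidence, empirical accuracy, count) plus expected calibration error."""
    confidences, correct = np.asarray(confidences, float), np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows, ece = [], 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        conf, acc, frac = confidences[mask].mean(), correct[mask].mean(), mask.mean()
        rows.append((conf, acc, int(mask.sum())))
        ece += frac * abs(conf - acc)       # weight each bin by its share of predictions
    return rows, ece

# Toy predictions from an overconfident model: high confidence, mediocre accuracy.
conf = [0.95, 0.9, 0.92, 0.88, 0.6, 0.55, 0.97, 0.93]
hit  = [1,    0,   1,    0,    1,   0,    1,    0]
table, ece = calibration_table(conf, hit, n_bins=5)
for row in table:
    print("mean conf %.2f  accuracy %.2f  n=%d" % row)
print("ECE = %.3f" % ece)
```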
Human judgments of translation quality play a vital role in the development of effective machine translation (MT) systems. Such judgments can be used to measure system quality in evaluations and to tune automatic metrics such as METEOR (Banerjee and Lavie, 2005) which act as stand-ins for human evaluators. However, collecting reliable human judgments often requires significant time commitments from expert annotators, leading to a general scarcity of judgments and a significant time lag when seeking judgments for new tasks or languages.Amazon's Mechanical Turk (MTurk) service facilitates inexpensive collection of large amounts of data from users around the world. However, Turkers are not trained to provide reliable annotations for natural language processing (NLP) tasks, and some Turkers attempt to game the system by submitting random answers. For these reasons, NLP tasks must be designed to be accessible to untrained users and data normalization techniques must be employed to ensure that the data collected is usable.This paper describes a MT evaluation task for translations of English into Arabic conducted using MTurk and compares several data normalization techniques. A novel 2-stage normalization technique is demonstrated to produce the highest agreement between Turkers and experts while retaining enough judgments to provide a robust tuning set for automatic evaluation metrics. | 0 |
Many tasks in natural language processing involve functions that assign scores-such as logprobabilities-to candidate strings or sequences. Often such a function can be represented compactly as a weighted finite state automaton (WFSA). Finding the best-scoring string according to a WFSA is straightforward using standard best-path algorithms.It is common to construct a scoring WFSA by combining two or more simpler WFSAs, taking advantage of the closure properties of WFSAs. For example, consider noisy channel approaches to speech recognition (Pereira and Riley, 1997) or machine translation (Knight and Al-Onaizan, 1998) . Given an input f , the score of a possible English transcription or translation e is the sum of its language model score log p(e) and its channel model score log p(f | e). If each of these functions of e is represented as a WFSA, then their sum is represented as the intersection of those two WFSAs.WFSA intersection corresponds to constraint conjunction, and hence is often a mathematically natural way to specify a solution to a problem involving multiple soft constraints on a desired string. Unfortunately, the intersection may be computationally inefficient in practice. The intersection of K WFSAs having n 1 , n 2 , . . . , n K states may have n 1 •n 2 • • • n K states in the worst case. 1 In this paper, we propose a more efficient method for finding the best path in an intersection without actually computing the full intersection. Our approach is based on dual decomposition, a combinatorial optimization technique that was recently introduced to the vision (Komodakis et al., 2007) and language processing communities Koo et al., 2010) . Our idea is to interrogate the several WFSAs separately, repeatedly visiting each WFSA to seek a high-scoring path in each WFSA that agrees with the current paths found in the other WSFAs. This iterative negotiation is reminiscent of message-passing algorithms (Sontag et al., 2008) , while the queries to the WFSAs are reminiscent of loss-augmented inference (Taskar et al., 2005) .We remark that a general solution whose asymptotic worst-case runtime beat that of naive intersection would have important implications for complexity theory (Karakostas et al., 2003) . Our approach is not such a solution. We have no worst-case bounds on how long dual decomposition will take to converge in our setting, and indeed it can fail to converge altogether. 2 However, when it does converge, we have a "certificate" that the solution is optimal.Dual decomposition is usually regarded as a method for finding an optimal vector in R d , subject to several constraints. However, it is not obvious how best to represent strings as vectors-they 1 Most regular expression operators combine WFSA sizes additively. It is primarily intersection and its close relative, composition, that do so multiplicatively, leading to inefficiency when two large WFSAs are combined, and to exponential blowup when many WFSAs are combined. Yet these operations are crucially important in practice.2 An example that oscillates can be constructed along lines similar to the one given by . have unbounded length, and furthermore the absolute position of a symbol is not usually significant in evaluating its contribution to the score. 3 One contribution of this work is that we propose a general, flexible scheme for converting strings to feature vectors on which the WFSAs must agree. 
In principle the number of features may be infinite, but the set of "active" features is expanded only as needed until the algorithm converges. Our experiments use a particular instantiation of our general scheme, based on n-gram features.We apply our method to a particular task: finding the Steiner consensus string (Gusfield, 1997 ) that has low total edit distance to a number of given, unaligned strings. As an illustration, we are pleased to report that "alia" and "aian" are the consensus popular names for girls and boys born in the U.S. in 2010. We use this technique for consensus decoding from speech recognition lattices, and to reconstruct the common source of up to 100 strings corrupted by random noise. Explicit intersection would be astronomically expensive in these cases. We demonstrate that our approach tends to converge rather quickly, and that it finds good solutions quickly in any case. | 0 |
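To make the consensus objective concrete, the brute-force sketch below scores candidates by total edit distance to the noisy inputs. Restricting the candidate set to the inputs themselves is a simplification made purely for illustration: the true Steiner string need not be one of the inputs, which is why the paper instead searches the full string space by having the per-string scoring automata negotiate, via dual decomposition, over shared n-gram feature counts.

```python
# Toy illustration of the Steiner consensus objective: pick the string with the
# lowest total edit distance to a set of noisy observations. Here the candidate
# set is just the observations themselves; the paper instead searches the full
# string space via dual decomposition, with WFSAs negotiating over n-gram counts.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution / match
        prev = cur
    return prev[-1]

def consensus(strings, candidates=None):
    candidates = candidates or strings
    return min(candidates, key=lambda c: sum(edit_distance(c, s) for s in strings))

noisy = ["recognition", "recognitoin", "recognision", "rekognition"]
print(consensus(noisy))  # "recognition" has the lowest total edit distance here
```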
The Prepositional Phrase (PP) attachment problem (Ratnaparkhi et al., 1994) is a classic ambiguity problem and is one of the main sources of errors for syntactic parsers (Kummerfeld et al., 2012). Consider the examples in Figure 1. In the first case, the correct attachment is the prepositional phrase attaching to the restaurant, the noun. In the second case, by contrast, the attachment site is the verb went. While the attachments are ambiguous, the ambiguity is more severe when unseen or infrequent words like Hudson are encountered. Classical approaches for the task exploit a wide range of lexical, syntactic, and semantic features and make use of knowledge resources like WordNet and VerbNet (Stetina and Nagao, 1997; Agirre et al., 2008; Zhao and Lin, 2004). In recent years, word embeddings have become a very popular representation for lexical items (Mikolov et al., 2013; Pennington et al., 2014). The idea is that the dimensions of a word embedding capture lexical, syntactic, and semantic features of words; in essence, the type of information that is exploited in PP attachment systems. Recent work in dependency parsing (Chen and Manning, 2014) suggests that these embeddings can also be useful to resolve PP attachment ambiguities. We follow this last line of research and investigate the use of word embeddings for PP attachment. Unlike previous work, we consider several types of compositions for the vector embeddings corresponding to the words involved in a PP attachment decision. In particular, our model defines parameters over the tensor product of these embeddings. We control the capacity of the model by imposing low-rank constraints on the corresponding tensor, which we formulate as a convex loss minimization. We conduct experiments on several datasets and settings and show that this relatively simple multi-linear model can give performance comparable (and in some cases even superior) to more complex neural network models that use the same information. Our results suggest that for the PP attachment problem, exploring product spaces of dense word representations produces improvements in performance comparable to those obtained by incorporating non-linearities via a neural network. Our main contributions are: a) we present a simple multi-linear model that makes use of tensor products of word embeddings, capturing all possible conjunctions of latent embeddings; b) we conduct comprehensive experiments with different embeddings and composition operations for PP attachment and observe that syntax-infused embeddings perform significantly better; c) our proposed simple multi-linear model that uses only word embeddings can outperform complex non-linear architectures that exploit similar information; d) for out-of-domain evaluation sets, we observe significant improvements by using word embeddings trained on the source and target domains. With these improvements, our tensor products outperform state-of-the-art dependency parsers on PP attachment decisions. Ratnaparkhi et al. (1994) first proposed a formulation of PP attachment as a binary prediction problem. The task is as follows: we are given a four-way tuple v, o, p, m where v is a verb, o is a noun object, p is a preposition, and m is a modifier noun; the goal is to decide whether the prepositional phrase p, m attaches to the verb v or to the noun object o. | 0 |
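The scoring form described above can be written down in a few lines. The sketch below uses random toy embeddings and an untrained, randomly initialized weight tensor purely to show the shape of the computation: the score is the inner product of a 4-way weight tensor with the tensor (outer) product of the verb, object, preposition, and modifier embeddings. The low-rank constraint and the convex training loss from the paper are omitted, and all names and values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding size; the paper uses pretrained word embeddings
emb = {w: rng.normal(size=d) for w in ["ate", "pizza", "with", "friends"]}

# Weight tensor over the 4-way tensor product of (verb, object, prep, modifier)
# embeddings. The paper constrains W to be low-rank during training; here it is
# random, purely to show the scoring form.
W = rng.normal(size=(d, d, d, d)) * 0.01

def score(v, o, p, m):
    # <W, v ⊗ o ⊗ p ⊗ m>: captures all conjunctions of embedding dimensions.
    return float(np.einsum("ijkl,i,j,k,l->", W, emb[v], emb[o], emb[p], emb[m]))

def predict(v, o, p, m):
    return "verb" if score(v, o, p, m) > 0 else "noun"

print(predict("ate", "pizza", "with", "friends"))  # untrained: arbitrary output
```

With d-dimensional embeddings the weight tensor has d^4 entries, which is exactly why the paper controls capacity through a low-rank constraint rather than using the full tensor freely.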
Voice-driven interactions with computing devices are becoming increasingly prevalent. Amazon's Alexa, Apple's Siri, Microsoft's Cortana, and the Google Assistant are prominent examples. Google observed that mobile devices surpassed traditional computers in terms of search traffic (Sterling, 2015) , and that 20% of mobile searches are voice queries (Pichai, 2016) . As opposed to the keyword-based shorter queries received by webbased search engines, voice-enabled natural language processing (NLP) systems deal with longer, natural-language queries. This raises the question of how such data should be utilized for continuous improvements of the underlying methods.In this paper, we introduce SYNTAVIZ, a web interface for visualizing natural-language queries based on a syntax-driven ontology, thereby enabling its user to quickly gain insights on the statistical and structural properties of large-scale customer-generated data. We provide use cases of SYNTAVIZ on a dataset of 1 million unique voice queries issued by users of the Xfinity X1 entertainment platform of Comcast-one of the largest cable companies in the United States with approximately 22 million subscribers. We are planning to make the source code of SYNTAVIZ freely available as a contribution to the community. | 0 |
Recently, sentiment analysis has attracted a lot of attention from researchers. Most previous work attempted to detect overall sentiment polarity on a text span, such as document, paragraph and sentence. Since sentiments expressed in text always adhere to objects, it is much meaningful to identify the sentiment target and its orientation, which helps user gain precise sentiment insights on specific sentiment target.The aspect based sentiment analysis (ABSA) task (Task 4) (Pontiki et al., 2014) in SemEval 2014 is to extract aspect terms, determine its semantic category, and then to detect the sentiment orientation of the extracted aspect terms and its category. Specifically, it consists of 4 subtasks. The aspect term extraction (ATE) aims to extract the aspect terms from the sentences in two giv-en domains (laptop and restaurant). The aspect category detection (ACD) is to identify the semantic category of aspects in a predefined set of aspect categories (e.g., food, price). The aspect term polarity (ATP) classification is to determine whether the sentiment polarity of each aspect is positive, negative, neutral or conflict (i.e., both positive and negative). The aspect category polarity (ACP) classification is to determine the sentiment polarity of each aspect category. We participated in these four subtasks.Generally, there are three methods to extract aspect terms: unsupervised learning method based on word frequency ( (Ku et al., 2006) , (Long et al., 2010) ), supervised machine learning method (Kovelamudi et al., 2011) and semi-supervised learning method (Mukherjee and Liu, 2012) where only several user interested category seeds are given and used to extract more categorize aspect terms. Since sentiments always adhere to entities, several researchers worked on polarity classification of entity. For example, (Godbole et al., 2007) proposed a system that assigned scores representing positive or negative opinion to each distinct entity in the corpus. (Kim et al., 2013) presented a hierarchical aspect sentiment model to classify the polarity of aspect terms from unlabeled online reviews. Moreover, some sentiment lexicons, such as SentiWordNet (Baccianella et al., 2010) and M-PQA Subjectivity Lexicon (Wilson et al., 2009) , have been used to generate sentiment score features (Zhu et al., 2013) .The rest of this paper is organized as follows. From Section 2 to Section 5, we describe our approaches to the Aspect Term Extraction task, the Aspect Category detection task, the Aspect Term Polarity task and the Aspect Category Polarity task respectively. Section 6 provides the conclusion. | 0 |
Despite the recent advances in machine translation (MT) technology, MT systems are not able to provide ready-to-use translations in contexts where translation accuracy is critical, such as medical or political applications, or even in contexts where correctness is demanded, such as hardware manuals or news texts. This has given rise to increasing research in computer assisted translation (CAT), where the focus is on how to provide a human translator with the best tools available in order to improve the human's efficiency. To this purpose, several ongoing FP7 projects were approved by the European Commission, some of them still being active. These projects pursue a very similar purpose, which is to develop a next generation CAT workbench. One of the most innovative research directions regarding CAT tools involves interactive translation prediction (ITP) (Barrachina et al., 2009). Under this paradigm, system and human translator interact more closely than in a conventional post-editing setup, and the ITP engine attempts to provide improved completions for the sentence being translated after each interaction of the human translator. Ideally, a constrained decoding, forced to produce the part of the sentence which has already been validated, should be performed before providing every suggestion. However, a full decoding process leads to an important problem in ITP: the system needs to be able to provide translation completions in real time, since even a small delay in response time could easily lead users to reject the system. For this purpose, a common approximation is to extract a wordgraph off-line, i.e., before the user is actually sitting in front of the CAT tool. Then, during the ITP procedure, suggestions are obtained by searching for the best path in such a graph. In the present work we report on different approaches analysed for the purpose of reducing the size of the wordgraph mentioned above when using a state-of-the-art ITP system. Since response time is critical, we studied three different strategies and measured the response time in a simulated ITP setup, alongside an analysis of the degradation of the final translation quality obtained, both in terms of automatic MT metrics and in terms of simulated user effort. The rest of this paper is structured as follows: in the next section, we briefly review the principles of ITP as an evolution of the classical SMT formulation. Then, in Section 4, we review the theoretical grounds of the strategies studied. Next, Section 5 reports the experiments conducted to assess the quality of the pruned wordgraphs and the response time associated. Finally, Section 6 presents the conclusions of the present work. | 0 |
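The off-line wordgraph idea can be illustrated with a toy graph. In the sketch below, arcs carry hypothetical log-probabilities, the validated prefix is matched along the graph, and the best-scoring completion from the reached node is returned by memoized search; error-tolerant prefix matching and the pruning strategies studied in the paper are left out.

```python
# Toy word graph: node -> list of (word, log_prob, next_node). A real ITP system
# would extract such a graph from the SMT decoder's search space.
from functools import lru_cache

GRAPH = {
    0: [("the", -0.1, 1), ("a", -0.5, 1)],
    1: [("red", -0.7, 2), ("big", -0.4, 2)],
    2: [("house", -0.2, 3), ("home", -0.9, 3)],
    3: [],  # final node
}
FINAL = 3

@lru_cache(maxsize=None)
def best_suffix(node):
    """Best-scoring completion (score, words) from `node` to the final node."""
    if node == FINAL:
        return 0.0, ()
    options = []
    for word, score, nxt in GRAPH[node]:
        sub_score, sub_words = best_suffix(nxt)
        options.append((score + sub_score, (word,) + sub_words))
    return max(options)

def suggest(prefix):
    """Best completion of a user-validated prefix, following matching arcs."""
    node, score = 0, 0.0
    for word in prefix:
        arcs = [(s, nxt) for w, s, nxt in GRAPH[node] if w == word]
        if not arcs:
            return None  # prefix not in the graph; real systems back off here
        s, node = max(arcs)
        score += s
    suffix_score, suffix = best_suffix(node)
    return score + suffix_score, prefix + list(suffix)

print(suggest(["the", "red"]))  # -> (-1.0, ['the', 'red', 'house'])
```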
The probability of accurate information transmission is dependent on the perceptibility of difference between differently signifying forms, that is, contrast. The possible mechanisms by which contrast arises and is preserved in lexical forms, on the other hand, have been less clear. Many grammatical theories of the last century assume that the language faculty is constituted to directly optimize contrast in some way, e.g., (Martinet 1955) , (Flemming 1995) , and much computational work also operates within the assumption that contrast between units of form is maintained through some kind of direct monitoring and manipulation of contrast, e.g., (Lindblom 1986) , (de Boer 2000) . In all of these approaches, contrast is a property of forms. Here, I will present evidence within an exemplar model of lexical production and perception (Goldinger 2000) , (Pierrehumbert 2001) that the fact of a distinction between categories themselves, rather than the forms that instantiate them, can be indirectly responsible for driving contrast preservation through the statistics of assignment of forms to categories.Within exemplar models of linguistic category structure, the act of categorizing a percept does not strip that percept of all non-contrastive detail, e.g., (Johnson 1997) , (Pierrehumbert 2001 ). When we take into account evidence that production of an output of a lexical category may be based on details of previously perceived instances of that category (Goldinger 2000) , we see that a production-perception feedback loop is closed, in which details in what is perceived can be subsequently reflected in the details of what is produced (Pierrehumbert 2001) , (Oudeyer 2002 ). Whenever a system exhibits variation among elements, selection of variants over some criterion, and subsequent reproduction of selected elements, the system will evolve through natural selection on the basis of that criterion. Hence, any factors within the production-perception loop that bias the distribution of forms that are produced, the distribution of forms that are perceived, or the way that percepts are categorized, will result in evolution of category contents. Within the model presented here, lexical categories are populated by exemplars that have been previously categorized as correspondents of that category, and the output of a given category follows a distribution defined in part by the range of exemplars of that category, e.g., (Goldinger 2000) , (Pierrehumbert 2001 (Pierrehumbert , 2002 . Outputs are recognized as correspondents of a given category by comparison to exemplars already stored within that category (Pierrehumbert 2001) ; see also (Luce and Pisoni, 1998) . Because outputs of a category can be re-stored as new exemplars within that category within a community of speakers, any asymmetries in either the form of outputs, or the likelihood of recognition and storage of those outputs, will result in a shift in the contents of that category over time (Pierrehumbert 2001) , (Oudeyer 2002) , and (Wedel 2004) .Here, I show simulation results suggesting that contrast between distinct form-meaning pairings can arise indirectly from asymmetries in the consistency of categorization of more contrastive, versus less contrastive outputs 1 . Because more contrastive outputs make up a relatively greater proportion of the regularly stored exemplars in a given category than less contrastive outputs, they should have a proportionally greater influence on the evolution of that category. 
This asymmetry in the statistics of recognition and storage results in biased evolution of categories towards greater contrast.Within linguistics, the notion that contrast maintenance is an indirect effect of contrast's effect on a hearer/acquirer's categorization behavior has been suggested by Pierrehumbert (2002) , and by Gregory Guy (1996) on the basis of corpus data on preservation of morphological contrasts. Guy notes that data from production corpora will always underestimate the true extent of speakers' failure to produce a given meaningful contrast. For example, if a transcriber perceives the utterance 'I cook the chicken', in the absence of additional information s/he is likely to simply transcribe it as such, even if the speaker actually intended the sentence to be in past tense, but elided the [-t] past tense marker. Guy notes that language acquirers are no different from transcribers, such that the perception data from which a language learner develops a grammar will be biased towards the more contrastive utterances in the production data set. This steady selection of more contrastive forms in the categorized utterance set upon which acquisition is based should result in a tendency for grammatical processes to emerge that appear to function to preserve contrast, when they in fact only act to reproduce the patterns in the data set that the acquirer perceives.This mechanism for category separation through competition for category members is formally parallel to a proposed mechanism of sympatric speciation first proposed by Darwin (1859, chap. 4) and further developed in recent theoretical research on the effects of resource competition on the distribution of phenotypes in a population (Kondrashov and Kondrashov 1999) , (Dieckmann and Doebeli 1999 and references therein) . In this model of sympatric speciation (speciation in the absence of geographical separation), phenotypic divisions within a population and subsequent speciation can be driven by inequalities in the degree of competition experienced by individuals lying at different points on a distribution of phenotypes relating to resource exploitation. Individuals exhibiting intermediate phenotypes compete against a larger fraction of the population, while more extreme phenotypes have fewer competitors, and therefore greater individual access to resources. The higher fitness of individuals lying at the extremes of a phenotypic distribution can eventually produce a split in the population along this phenotypic dimension, setting the stage for subsequent speciation.The same statistical influence of resource competition on fitness has also been proposed to drive 'niche specialization' among separate species occupying overlapping niches (Schoener 1974) , (Dieckmann and Doebeli 1999) . For example, if two species that utilize an overlapping set of resources jointly colonize a new environment, they tend to evolve to specialize on different portions of the resource distribution. This is proposed to occur because phenotypic variants of each species that happen to focus on an extreme of the resource distribution experience less competition than those who prefer the center of the distribution.Within the exemplar based model proposed here for contrast maintenance, lexical categories are formally parallel to competing species undergoing selection for niche specialization. A category will be less often matched with a percept that is also close to another category than a percept that is close to no other category. 
Further, because the matching behavior of a category is determined by its contents, a category will evolve to be more specific for those percepts most often identified as members of that category. In this way, categories will tend to evolve to split the available percept space evenly, minimizing regions of overlap (see (Pierrehumbert 2002) for additional discussion of overlap minimization in evolving exemplar-based categories). | 0 |
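A deliberately stripped-down version of this dynamic can be simulated in a few lines. In the sketch below, two categories store one-dimensional exemplars, productions are perturbed copies of stored exemplars, and a production is stored only if it is categorized as its intended category; all numerical settings are arbitrary. Because near-boundary productions are the ones most often misclassified and discarded, the stored distributions tend to drift apart, which is the indirect contrast-preservation effect described above.

```python
# Highly simplified 1-D exemplar simulation: two categories exchange productions;
# a production is stored only if it is recognized as its own category, so outputs
# near the category boundary are filtered out and the categories drift apart.
import random

random.seed(1)
categories = {"A": [random.gauss(0.45, 0.05) for _ in range(50)],
              "B": [random.gauss(0.55, 0.05) for _ in range(50)]}

def similarity(x, exemplars):
    # Summed similarity to stored exemplars (a crude stand-in for activation).
    return sum(1.0 / (1e-6 + abs(x - e)) for e in exemplars)

def step(noise=0.02, memory=200):
    for label, exemplars in categories.items():
        production = random.choice(exemplars) + random.gauss(0.0, noise)
        winner = max(categories, key=lambda c: similarity(production, categories[c]))
        if winner == label:                       # recognized as intended category
            categories[label].append(production)  # ...and stored as a new exemplar
            del categories[label][:-memory]       # bounded memory / exemplar decay

for t in range(3000):
    step()

means = {c: sum(v) / len(v) for c, v in categories.items()}
print(means)  # category means tend to move apart relative to the initial 0.45/0.55
```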
Recently, document translation (or document-level translation) has received a growing interest in the fields of both human translation and Machine Translation (MT). For example, with the increasing demand in cross-lingual patent retrieval and filing patent applications in foreign countries, patent document translation has been recognized as one of the fundamental issues in patent processing and related applications (Fujii et al., 2008) .Unlike traditional sentence-level translation, document-level translation requires translators to hold a global view of the whole document rather than to focus on translating each source sentence individually. There are a number of critical issues in document-level translation (Catford, 1965) . One of them is the issue of translation consistency (Nida, 1964) . E.g. when we translate a term within a document, we prefer to keep the same translation throughout the whole document no matter how many times it is repeated. This is especially impor-tant in certain applications such as translation of legal documents and government documents. In some cases, consistency is even regarded as one of the primary quality measurements of translation (He et al., 2009) .However, directly modeling the translation problem on the whole document is a challenging issue due to its high complexity. It is even intractable to implement or run such a translation system in practice. To ease the problem, a general solution is to view the source text as a series of independent sentences and do translation using sentence-level SMT approaches. However, in this case, document contextsessential factors to document-level translationare generally overlooked either in training or inference (i.e. decoding in SMT) stage.In this paper, we address the issue of how to introduce document contexts into current SMT systems for document-level translation. In particular, we focus on translation consistency which is one of the most important issues in document-level MT. We propose a 3-step approach to incorporating document contexts into a traditional SMT system, and demonstrate that our approach can effectively reduce the errors caused by inconsistent translation. More interestingly, it is observed that using document contexts is promising for BLEU improvement. | 0 |
One of the core communicative functions of language is to modulate and reproduce social dynamics, such as friendship, familiarity, formality, and power (Hymes, 1972) . However, large-scale empirical work on understanding this communicative function has been stymied by a lack of labeled data: it is not clear what to annotate, let alone whether and how such annotations can be produced reliably. Computational linguistics has made great progress in modeling language's informational dimension, but -with a few notable exceptions -computation has had little to contribute to our understanding of language's social dimension.Yet there is a rich theoretical literature on social structures and dynamics. In this paper, we focus on one such structure: signed social networks, in which edges between individuals are annotated with information about the nature of the relationship. For example, the individuals in a dyad may be friends or foes; they may be on formal or informal terms; or they may be in an asymmetric power relationship. Several theories characterize signed social networks: in structural balance theory, edge signs indicate friendship and enmity, with some triads of signed edges being stable, and others being unstable (Cartwright and Harary, 1956) ; conversely, in status theory (Leskovec et al., 2010b) , edges indicate status differentials, and triads should obey transitivity. But these theoretical models can only be applied when the sign of each social network connection is known, and they do not answer the sociolinguistic question of how the sign of a social tie relates to the language that is exchanged across it.We present a unified statistical model that incorporates both network structure and linguistic content. The model connects signed social networks with address terms (Brown and Ford, 1961) , which include names, titles, and "placeholder names," such as dude. The choice of address terms is an indicator of the level of formality between the two parties: for example, in contemporary North American English, a formal relationship is signaled by the use of titles such as Ms and Mr, while an informal relationship is signaled by the use of first names and placeholder names. These tendencies can be captured with a multinomial distribution over address terms, conditioned on the nature of the relationship. However, the linguistic signal is not the only indicator of formality: network structural properties can also come into play. For example, if two individuals share a mutual friend, with which both are on informal terms, then they too are more likely to have an informal relationship. With a log-linear prior distribution over network structures, it is possible to incorporate such triadic features, which relate to structural balance and status theory.Given a dataset of unlabeled network structures and linguistic content, inference in this model simultaneously induces three quantities of interest:• a clustering of network edges into types;• a probabilistic model of the address terms that are used across each edge type, thus revealing the social meaning of these address terms;• weights for triadic features of signed networks, which can then be compared with the predictions of existing social theories.Such inferences can be viewed as a form of sociolinguistic structure induction, permitting social meanings to be drawn from linguistic data. 
In addition to the model and the associated inference procedure, we also present an approach for inducing a lexicon of address terms, and for tagging them in dialogues. We apply this procedure to a dataset of movie scripts (Danescu-Niculescu-Mizil and Lee, 2011). Quantitative evaluation against human ratings shows that the induced clusters of address terms correspond to intuitive perceptions of formality, and that the network structural features improve predictive likelihood over a purely text-based model. Qualitative evaluation shows that the model makes reasonable predictions of the level of formality of social network ties in well-known movies.We first describe our model for linking network structure and linguistic content in general terms, as it can be used for many types of linguistic content and edge labels. Next we describe a procedure which semi-automatically induces a lexicon of address terms, and then automatically labels them in text. We then describe the application of this proce-dure to a dataset of movie dialogues, including quantitative and qualitative evaluations. | 0 |
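As a toy illustration of how the two ingredients combine, the sketch below scores an assignment of tie types with (i) a multinomial likelihood of the observed address terms given each edge's type and (ii) a log-linear prior with a single triadic feature that rewards uniformly-typed triads. The probability tables, the weight, and the two-type inventory are invented for the example; the paper induces these quantities from unlabeled dialogue.

```python
# Toy joint score in the spirit of the model described above: address-term
# likelihood per edge plus a log-linear prior over signed-network triads.
import math
from itertools import combinations

P_TERM = {  # P(address term | tie type), invented for illustration
    "formal":   {"mr": 0.45, "ms": 0.35, "first_name": 0.15, "dude": 0.05},
    "informal": {"mr": 0.05, "ms": 0.05, "first_name": 0.55, "dude": 0.35},
}
TRIAD_WEIGHT = 1.5  # reward triads whose three edges share the same type

def log_score(edge_types, observed_terms):
    """edge_types: {(u, v): type}; observed_terms: {(u, v): [terms used]}."""
    ll = sum(math.log(P_TERM[edge_types[e]][t])
             for e, terms in observed_terms.items() for t in terms)
    nodes = {n for e in edge_types for n in e}
    prior = 0.0
    for a, b, c in combinations(sorted(nodes), 3):
        triad = [edge_types.get(e) for e in ((a, b), (a, c), (b, c))]
        if None not in triad and len(set(triad)) == 1:
            prior += TRIAD_WEIGHT
    return ll + prior

terms = {("ann", "bob"): ["dude"], ("ann", "cat"): ["first_name"],
         ("bob", "cat"): ["first_name", "dude"]}
all_informal = {e: "informal" for e in terms}
mixed = {**all_informal, ("bob", "cat"): "formal"}
print("all informal:", round(log_score(all_informal, terms), 2))
print("one formal:  ", round(log_score(mixed, terms), 2))
```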
This paper describes a schema that enriches Abstract Meaning Representation (AMR) (Banarescu et al., 2013) to support Natural Language Understanding (NLU) in human-robot dialogue systems. AMR is a formalism for sentence semantics that abstracts away many syntactic idiosyncrasies and represents sentences with rooted directed acyclic graphs (Figure 1a shows the PENMAN notation of the graph). Although AMR provides a suitable level of abstraction for representing the content of sentences in our domain, it lacks a level of representation for speaker intent, which would capture the pragmatic effect of an utterance in dialogue. Pragmatic information is critical in dialogue with a conversational agent. For example, a request for information and a request for action serve distinct dialogue functions. Similarly, a promise regarding a future action and an assertion about a past action update the conversational context in very different ways. In our problem space, which involves a robot completing search and navigation tasks, the robot communicates about actions it can take in the environment such as moving, searching, and reporting back. While the robot is insensitive to many lexical differences, such as those between the movement terms go, move, and drive, it needs to understand specific instructions such as how far to go and when, as well as communicate and discuss the status of a given task. Additionally, it needs to understand whether the illocutionary force of a communication is a command, a suggestion, or a clarification. To address these needs, we introduce a detailed and robust schema for representing illocutionary force in AMR called "Dialogue-AMR" (Figure 1b). This expands and refines previous work which proposed basic modifications for how to annotate speech acts and tense and aspect information within AMR (Bonial et al., 2019a). The contributions of the present research are: i) a set of speech acts finalized and situated in a taxonomy (Section 3.1); ii) the refinement of the Dialogue-AMR annotation schema to provide coverage of novel language (Sections 3.2 and 3.3); and iii) the creation of the "DialAMR" corpus, a collection of human-robot dialogues to which the new Dialogue-AMR schema has been applied (Section 4). DialAMR has additionally been annotated with standard AMR, thus constituting one of the first corpora of dialogue annotated with AMR (see related work in Section 5) and allowing for comparison of both AMR schemas on the same data. Although some of the domain-specific extensions are tailored to our human-robot search and navigation application, the addition of illocutionary force to AMR is useful for many applications of human-agent conversation. Furthermore, the general paradigm of extending AMR is useful for applications which need to gloss over some linguistic distinctions while retaining others. (Figure 2: Planned NLU Pipeline. Verbal instructions are parsed into standard AMR using automated parsers, converted into Dialogue-AMR via graph-to-graph transformation, then, if executable, mapped to a robot behavior. The robot responds with questions or feedback.) A frequent dilemma in designing meaning representations for limited-domain dialogue systems is whether to preserve a general purpose representation that is adequate for capturing most language expressions, or whether to focus on only the small subset that the system will be able to deal with. The former can make the representations more complex, language interpretation more ambiguous, and system-specific inference more difficult.
The latter approach addresses these problems but may lose the ability to transfer to even a very similar domain and may require more in-domain data than is available for a new project. In order to try to maintain the advantages of each approach, we are leveraging DialAMR to develop an NLU pipeline (Figure 2) which contains both a general purpose representation language (Standard AMR) as well as Dialogue-AMR, which is more amenable to inferences that a robot needs to make when engaged in a collaborative navigation task. This pipeline converts automatically generated standard AMR graphs of the input language (Section 4.2.1) into Dialogue-AMR graphs augmented with tense, aspect, and speech act information (Section 4.2.2). | 0 |
Focusing subjuncts such as only, even, and also are a subclass of the sentence-element class of adverbials (Quirk et al., 1985). They draw attention to a part of a sentence, the focus of the focusing subjunct, which often represents 'new' information. Focusing subjuncts are usually realized by adverbs, but occasionally by prepositional phrases. Focusing subjuncts emphasize, approximate, or restrict their foci. They modify the force or truth value of a sentence, especially with respect to its applicability to the focused item (Quirk et al., 1985, §8.116). (The work described in this paper was done at the University of Toronto.) There are several reasons why developing any semantics for focusing subjuncts is a difficult task. First, focusing subjuncts are 'syntactically promiscuous'. They can adjoin to any maximal projection. They can occur at almost any position in a sentence. Second, focusing subjuncts are also 'semantically promiscuous'. They may focus (draw attention to) almost any constituent. They can precede or follow the item that they focus, and need not be adjacent to this item. The focus need only be contained somewhere within the syntactic sister of the focusing subjunct. Because of this behavior, it is difficult to determine the intended syntactic argument (adjunct) and focus of a focusing subjunct. Sentences such as those in (1) can be ambiguous, even when uttered aloud with intonational effects. (In the example sentences in this paper, small capitals denote intonational stress, angle brackets enclose the focus of a focusing subjunct, square brackets [ ] set off the constituent to which the focusing subjunct adjoins, and unacceptable sentences are preceded by an asterisk.) (1) 1. John could also (SEE) his wife from the doorway (as well as being able to talk to her). 2. John could also see (his WIFE) from the doorway (as well as her brother). 3. John could also see his wife (from the DOORway) (as well as from further inside the room). 4. John could also (see his wife from the DOORway) (as well as being able to do other things). Third, the location of intonational stress has an important effect on the meaning of a sentence containing a focusing subjunct. Sentences may be partly disambiguated by intonational stress: interpretations in which stress falls outside the intended focus of the focusing subjunct are impossible. For example, the sentence in (2), *John could also see (his wife) from the DOORway, is impossible on the indicated reading, since stress on door cannot confer focus on his wife. On the other hand, stress does not help to disambiguate between readings such as (1.3) and (1.4). Fourth, focusing subjuncts don't fit into the slot-filler semantics that seem adequate for handling many other sentence elements (see Section 1.3). At best, their semantic effect is to transform the semantic representation of the constituent they modify in some predictable compositional way (Hirst, 1987, p. 72). Finally, focusing subjuncts carry pragmatic "baggage". The meaning of a focusing subjunct includes distinct asserted and non-asserted parts (Horn, 1969), (Karttunen and Peters, 1979). For example, (3) asserts (4.1) but only presupposes (4.2) (Horn, 1969): (3) Only Muriel voted for Hubert. (4) 1. No one other than Muriel voted for Hubert. 2. Muriel voted for Hubert. Analogously, (5) asserts (6.1) and presupposes (6.2) (Karttunen and Peters, 1979): (5) Even Bill likes Mary. (6) 1. Bill likes Mary. 2.
Other people besides Bill like Mary; and of the people under consideration, Bill is the least likely to like Mary. The precise status of such pragmatic inferences is controversial. We take no stand here on this issue, or on the definition of "presupposition". We will simply say that, for example, (4.1) is due to the asserted meaning of only, and that (4.2) is produced by the non-asserted meaning of only. We desire a semantics for focusing subjuncts that is compositional (see Section 1.3), computationally practical, and amenable to a conventional, structured, near-first-order knowledge representation such as frames. It must cope with the semantic and syntactic problems of focusing subjuncts by being cross-categorial, being sensitive to intonation, and by distinguishing asserted and non-asserted meaning. By cross-categorial semantics we mean one that can cope with syntactic variability in the arguments of focusing subjuncts. We will demonstrate the following: • Intonation has an effect on meaning. A focus feature is useful to mediate between intonational information and meaning. • It is desirable to capture meaning in a multipart semantic representation. • An extended frame-based semantic representation can be used in place of higher-order logics to capture the meaning of focusing subjuncts. In this paper, we will use a compositional, frame-based approach to semantics. Focusing subjuncts have been thought difficult to fit into a compositional semantics because they change the meaning of their matrix sentences in ways that are not straightforward. A compositional semantics is characterized by the following properties: • Each word and well-formed syntactic phrase is represented by a distinct semantic object. • The semantic representation of a syntactic phrase is a systematic function of the representation of its constituent words and/or phrases. In a compositional semantics, the syntax drives the semantics. To each syntactic phrase construction rule there corresponds a semantic rule that specifies how the semantic objects of the constituents are (systematically) combined or composed to obtain a semantic object for the phrase. Proponents of compositional semantics argue that natural language itself is for the most part compositional. In addition, using a compositional semantics in semantic interpretation has numerous computational advantages. The particular incarnation of a compositional semantics that serves as the semantic framework for this work is the frame-based semantic representation of Hirst's Absity system (Hirst, 1987, 1988). Absity's underlying representation of the world is a knowledge base consisting of frames. A frame is a collection of stereotypical knowledge about some topic or concept (Hirst, 1987, p. 12). A frame is usually stored as a named structure having associated with it a set of slots or roles that may be assigned values or fillers. Absity's semantic objects belong to the types in a frame representation language called Frail (Charniak, 1981). Absity uses the following types of semantic object: • a frame name • a slot name • a frame determiner • a slot-filler pair • a frame description (i.e. a frame with zero or more slot-filler pairs) • either an instance or frame statement (atom or frame determiner with frame description). A frame determiner is a function that retrieves frames or adds them to the knowledge base. A frame description describes a frame in the knowledge base.
The filler of a slot is either an atom, or it is an instance, specified by a frame statement, of a frame in the knowledge base. In order to capture the meaning of sentences containing focusing subjuncts, we will augment Absity's frame-representation language with two new semantic objects, to be described in Section 3.3. The notation Hirst uses for frames is illustrated in Figure 1, which is a frame statement translation of the sentence (7) Ross washed the dog with a new shampoo: (a ?u (wash ?u (agent=(the ?x (person ?x (propername=Ross)))) (patient=(the ?y (dog ?y))) (instrument=(a ?z (shampoo ?z (age=new)))))). The semantics we will outline does not depend on any particular syntactic framework or theory. However, we choose to use Generalized Phrase Structure Grammar (GPSG) (Gazdar et al., 1985), because this formalism uses a compositional semantics (Montague, 1973). A central notion of GPSG that we will make use of is that of the features of a syntactic phrase. A feature is a piece of linguistic information, such as tense, number, and bar level; it may be atom-valued or category-valued. The groundwork for the analysis of focusing subjuncts was laid by Horn (1969). Horn describes only (when modifying an NP) as a predicate taking two arguments, "the term [x] within its scope" and "some proposition [Px] containing that term" (Horn, 1969, p. 99). The meaning of the predicate is then to presuppose that the proposition P is true of x, and to assert that x is the unique term of which P is true: ¬(∃y)(y ≠ x ∧ Py). Even takes the same arguments. It is said to presuppose (∃y)(y ≠ x ∧ Py) and to assert Px. Horn requires a different formulation of the meaning of only when it modifies a VP. Since his formulation is flawed, we do not show it here. Jackendoff's (1972, p. 242) analysis of even and only employs a semantic marker F that is assumed to be present in surface structure and associated with a node containing stress. He calls the semantic material associated with constituents marked by F the focus of a sentence. He proposes a rule that states that even and "related words" are associated with focus by having the focus in their range. Differences between the ranges of various focusing adverbs account for their different distributions (Jackendoff, 1972, pp. 249-250). For example: Range of even: If even is directly dominated by a node X, then X and all nodes dominated by X are in its range. Range of only: If only is directly dominated by a node X, then X and all nodes that are both dominated by X and to the right of only are in its range. That is, only cannot precede its focus (nor can just, which has the same range), but even can: John was the person least expected to. We will employ several aspects of Rooth's (1985) domain selection theory. A key feature of the theory is that only takes the VP adjacent to it in S-structure as its argument (an extension of the theory allows only to take arguments other than VPs). Rooth describes technical reasons for this arrangement (1985, p. 45).
Among these is the fact that focusing subjuncts can draw attention to two (or more) items that, syntactically, do not together constitute a well-formed phrase:(9) John only introduced (BILL) to (SUE).The prevailing linguistic theories allow a node (such as a focusing subjunct) only one argument in the syntactic or logical (function-argument) structures of a sentence.According to Rooth, the asserted meaning of (10) John only [vP introduced BILL to Sue].is "if John has a property of the form 'introduce y to Sue' then it is the property 'introduce Bill to Sue'" (Rooth, 1985, p. 44, p. 59 ). Rooth's theory would produce the same translation, shown in (11.2), for both sentence (10) and sentence (11.1).(11) 1. John only introduced Bill to SUE. | 0 |
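To make the inventory of Frail-style semantic objects above more tangible, here is a small Python rendering of frame statements, frame descriptions, and slot-filler pairs. The class names and the printing format are our own invention; only the example content mirrors the Figure 1 frame statement for "Ross washed the dog with a new shampoo."

```python
# Minimal rendering of Frail-style semantic objects (frame determiners, frame
# descriptions, slot-filler pairs). Class names and output format are assumptions.
from dataclasses import dataclass, field

@dataclass
class FrameDescription:
    name: str                                   # frame name, e.g. "wash"
    variable: str                               # instance variable, e.g. "?u"
    slots: dict = field(default_factory=dict)   # slot name -> filler

@dataclass
class FrameStatement:
    determiner: str                # e.g. "a" (add an instance) or "the" (retrieve)
    description: FrameDescription

    def __str__(self):
        d = self.description
        slots = " ".join(f"({s}={v})" for s, v in d.slots.items())
        return f"({self.determiner} {d.variable} ({d.name} {d.variable} {slots}))"

ross = FrameStatement("the", FrameDescription("person", "?x", {"propername": "Ross"}))
dog = FrameStatement("the", FrameDescription("dog", "?y"))
shampoo = FrameStatement("a", FrameDescription("shampoo", "?z", {"age": "new"}))
wash = FrameStatement("a", FrameDescription("wash", "?u",
                      {"agent": ross, "patient": dog, "instrument": shampoo}))
print(wash)  # prints a nested frame statement in the style of Figure 1
```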
More than one in three women and one in four men in the United States have experienced rape, physical violence, and/or stalking by an intimate partner (Black et al., 2011) . One in nine girls and one in 53 boys under the age of eighteen are sexually abused by an adult (Finkelhor et al., 2014) . Additionally, approximately one in ten elders in the USA have faced intimidation, isolation, neglect, and threats of violence. 1 Such interpersonal violence (IPV) 2 can lead to injury, depression, post-traumatic stress disorder, substance abuse, sexually transmitted diseases, as well as hospitalization, disability, or death (Black et al., 2011) . Most of the science on IPV is based on survey and interview data. However, the nature of IPV relationships can make people feel uncomfortable or unsafe when participating in such studies, leading to inaccurate results. Also, surveys can be costly and time-consuming to carry out (Schrading et al., 2015b) .Social media is an understudied source of IPV data. Over 79% of adults that frequent the internet utilize social media (Greenwood et al., 2016) . Online, individuals can anonymously share their experiences without fear of embarrassment or repercussions. Such narratives can also provide more details than surveys, and may lead to a deeper understanding of IPV. Nonetheless, it is extremely difficult to establish reference annotations useful for predictive modeling for discourse topics as emotionally charged as IPV.We meet these challenges with a combination of annotator labeling, analyzing annotations, applying semantic processing techniques (coreference resolution, semantic role labeling, sentiment analysis), developing classifiers, and studying physiological sensor measurements collected in real-time from annotators as they read and annotate texts. Our contributions include:1. Studying characteristics of the key players and their actions in IPV narratives.2. Using coreference chains as units that map human to automated annotations for analyzing semantic roles, predicates, and characteristics such as pronoun usage to affective tone.3. Applying distinct semantic features for classifying stakeholders, using coreference chains as classification units.Analyzing how annotators interpret emotional tone of texts vs. their own reactions to them, and discussing the link to annotators' measurement-based sensor data gathered as they labeled texts about abuse. | 0 |
This paper describes an approach for sharing resources among various grammar formalisms such as Feature-Based Lexicalized Tree Adjoining Grammar (FB-LTAG) (Vijay-Shanker, 1987; Vijay-Shanker and Joshi, 1988) and Head-Driven Phrase Structure Grammar (HPSG) (Pollard and Sag, 1994) by a method of grammar conversion. The RenTAL system automatically converts an FB-LTAG grammar into a strongly equivalent HPSG-style grammar. Strong equivalence means that both grammars generate exactly equivalent parse results, and that we can share the LTAG grammars and lexicons in HPSG applications. Our system can greatly reduce the workload of developing such a large resource (grammars and lexicons) from scratch. Our concern is, however, not limited to the sharing of grammars and lexicons. Strongly equivalent grammars enable the sharing of ideas developed in each formalism. There have been many studies on parsing techniques (Poller and Becker, 1998; Flickinger et al., 2000), on disambiguation models (Chiang, 2000; Kanayama et al., 2000), and on programming/grammar-development environments (Sarkar and Wintner, 1999; Doran et al., 2000; Makino et al., 1998). These works are each restricted to their own closed community, and the relation between them has not been well discussed. Investigating this relation should be valuable for both communities. In this paper, we show that strongly equivalent grammars enable the sharing of parsing techniques, which are dependent on each computational framework and have never been shared between the HPSG and LTAG communities. We apply our system to the latest version of the XTAG English grammar (The XTAG Research Group, 2001), which is a large-scale FB-LTAG grammar. A parsing experiment shows that an efficient HPSG parser with the obtained grammar achieved a significant speed-up against an existing LTAG parser. This result implies that parsing techniques for HPSG are also beneficial for LTAG parsing. We can say that the grammar conversion enables us to share HPSG parsing techniques in LTAG parsing. Figure 1 depicts a brief sketch of the RenTAL system. The system consists of the following four modules: Tree converter, Type hierarchy extractor, Lexicon converter and Derivation translator. The tree converter module is the core module of the system; it is an implementation of the grammar conversion algorithm given in Section 3. The type hierarchy extractor module extracts the symbols of the nodes, features, and feature values from the LTAG elementary tree templates and lexicon, and constructs the type hierarchy from them. The lexicon converter module converts LTAG elementary tree templates into HPSG lexical entries. The derivation translator module takes HPSG parse trees and translates them into LTAG derivation trees. A similar conversion from LTAG into HPSG was attempted manually in earlier work (Tateisi et al., 1998). However, their method depended on the translator's intuitive analysis of the original grammar. Thus the translation was manual and grammar dependent. The manual translation demanded considerable effort from the translator, and obscured the equivalence between the original and obtained grammars. Other works (Kasper et al., 1995; Becker and Lopez, 2000) convert HPSG grammars into LTAG grammars. However, given the greater expressive power of HPSG, it is impossible to convert an arbitrary HPSG grammar into an LTAG grammar. Therefore, a conversion from HPSG into LTAG often requires some restrictions on the HPSG grammar to suppress its generative capacity.
Thus, the conversion loses the equivalence of the grammars, and we cannot gain the above advantages. Section 2 reviews the source and the target grammar formalisms of the conversion algorithm. Section 3 describes the conversion algorithm which the core module in the RenTAL system uses. Section 4 presents the evaluation of the RenTAL system through experiments with the XTAG English grammar. Section 5 concludes this study and addresses future work. | 0 |
Crises and disasters frequently attract international attention to regions of the world that have previously been largely ignored by the international community. While it is possible to stock up on emergency relief supplies and, for the worst case, weapons, regardless of where exactly they are eventually going to be used, this cannot be done with multilingual information processing technology. This technology will often have to be developed after the fact in a quick response to the given situation. Multilingual data resources for statistical approaches, such as parallel corpora, may not always be available.In the fall of 2000, we decided to put the current state of the art to the test with respect to the rapid construction of a machine translation system from scratch.Within one month, we would hire translators; translate as much text as possible; and train a statistical MT system on the data thus created.The language of choice was Tamil, which is spoken in Sri Lanka and in the southern part of India. Tamil is a head-last language with a very rich morphology and therefore quite different from English. | 0 |
Despite recent motivation to utilize NLP for a wider range of real world applications, most NLP papers, tasks and pipelines assume raw, clean texts. However, many texts we encounter in the wild, including a vast majority of legal documents (e.g., contracts and legal codes), are not so clean, with many of them being visually structured documents (VSDs) such as PDFs. For example, of 7.3 million text documents found in the Panama Papers (which arguably approximates the distribution of data one would see in the wild), approximately 30% were PDFs 1 . Good preprocessing of VSDs is crucial in order to apply recent advances in NLP to real world applications. Thus far, the most micro and macro extremes of VSD preprocessing have been extensively studied, namely word segmentation and layout analysis (detecting figures, body texts, etc.; Soto and Yoo, 2019; Stahl et al., 2018), respectively. While these two lines of study allow extracting a sequence of words in the body of a document, neither of them accounts for local, logical structures such as paragraph boundaries and their hierarchies. These structures convey important information in any domain, but they are particularly important in the legal domain. For example, Figure 1 (1) shows raw text extracted from a non-disclosure agreement (NDA) in PDF format. An information extraction (IE) system must be aware of the hierarchical structure to successfully identify target information (e.g., extracting the "definition of confidential information" requires understanding of the hierarchy as in Figure 1 (2)). Furthermore, we must utilize the logical structures to remove debris that has slipped through layout analysis ("Page 1 of 5" in this case) and other structural artifacts (such as semicolons and section numbers) for a generic NLP pipeline to work properly. Yet, such logical structure analysis is difficult. Even the best PDF-to-text tool, with a word-related error rate as low as 1.0%, suffers from a 17.0% error rate on newline detection (Bast and Korzen, 2017), which is arguably the easiest form of logical structure analysis. The goal of this study is to develop a fine-grained logical structure analysis system for VSDs. We propose a transition parser-like formulation of logical structure analysis, where we predict a transition label between each consecutive pair of text fragments (e.g., the two fragments are in the same paragraph, or in different paragraphs at different levels of the hierarchy). Based on this formulation, we developed a feature-based machine learning system that fuses multimodal cues: visual (such as indentation and line spacing), textual (such as section numbering and punctuation), and semantic (such as language model coherence) cues. Finally, we show that our system is easily customizable to different types of VSDs and that it significantly outperforms baselines in identifying different structures in VSDs. For example, our system obtained a paragraph boundary detection F1 score of 0.953, significantly better than PDFMiner 2 , a popular PDF-to-text tool, with an F1 score of 0.739. We open-sourced our system and dataset 3 . | 0 |
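The transition-style formulation lends itself to a very small sketch: for every consecutive pair of text fragments, compute a handful of visual and textual features and score a transition label with a linear model. The feature set, the hand-set weights, and the three-label inventory below are illustrative stand-ins for the trained system and its richer label scheme.

```python
# Sketch of the transition formulation: predict a label for each consecutive
# pair of fragments from fused visual and textual cues. Labels, features and
# weights are simplified stand-ins, not the paper's actual scheme.
import re

LABELS = ["continue_paragraph", "new_paragraph", "new_subsection"]

def features(prev, cur):
    """prev/cur: dicts with 'text', 'x' (left indent in pt), 'gap' (vertical gap)."""
    return {
        "indent_delta": cur["x"] - prev["x"],                  # visual
        "large_gap": float(cur["gap"] > 12.0),                 # visual
        "starts_numbered": float(bool(re.match(r"^\d+(\.\d+)*[.)]", cur["text"]))),
        "prev_ends_sentence": float(prev["text"].rstrip().endswith((".", ";", ":"))),
        "starts_lowercase": float(cur["text"][:1].islower()),  # textual
    }

def predict(prev, cur, weights):
    f = features(prev, cur)
    scores = {lab: sum(weights[lab].get(k, 0.0) * v for k, v in f.items())
              for lab in LABELS}
    return max(scores, key=scores.get)

# Hand-set weights standing in for a trained classifier.
W = {"continue_paragraph": {"starts_lowercase": 2.0},
     "new_paragraph": {"large_gap": 1.5, "prev_ends_sentence": 1.0},
     "new_subsection": {"starts_numbered": 3.0, "indent_delta": -0.1}}

prev = {"text": "This includes:", "x": 72.0, "gap": 2.0}
cur = {"text": "2.2. Confidential Information does not include", "x": 72.0, "gap": 14.0}
print(predict(prev, cur, W))  # -> "new_subsection"
```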
Machine comprehension of text is the central goal in NLP. The academic community has proposed a variety of tasks, such as information extraction (Sarawagi, 2008) , semantic parsing (Mooney, 2007) and textual entailment (Androutsopoulos and Malakasiotis, 2010). However, these tasks assess performance on each task individually, rather than on overall progress towards machine comprehension of text.To this end, Richardson et al. (2013) proposed the Machine Comprehension Test (MCTest), a new challenge that aims at evaluating machine comprehension. It does so through an opendomain multiple-choice question answering task on fictional stories requiring the common sense reasoning typical of a 7-year-old child. It is easy to evaluate as it consists of multiple choice questions. Richardson et al. (2013) also showed how the creation of stories and questions can be crowdsourced efficiently, constructing two datasets for the task, namely MC160 and MC500. In addition, the authors presented a lexical matching baseline which is combined with the textual entailment recognition system BIUTEE (Stern and Dagan, 2011) .In this paper we develop an approach based on lexical matching which we extend by taking into account the type of the question and coreference resolution. These components improve the performance on questions that are difficult to handle with pure lexical matching. When combined with BIUTEE, we achieved 74.27% accuracy on MC160 and 65.96% on MC500, which are significantly better than those reported by Richardson et al. (2013) . Despite the simplicity of our approach, these results are comparable with the recent machine learning-based approaches proposed by Narasimhan and Barzilay (2015) , Wang et al. (2015) and Sachan et al. (2015) .Furthermore, we examine the types of questions and answers in the two datasets. We argue that some types are relatively simple to answer, partly due to the limited vocabulary used, which explains why simple lexical matching methods can perform well. On the other hand, some questions require understanding of higher level concepts such as those of the story and its characters, and/or require inference. This is still beyond the scope of current NLP systems. However, we believe our analysis will be useful in developing new methods and datasets for the task. To that extent, we will make our code and analysis publicly available. 1 | 0 |
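For reference, a bare-bones version of the lexical-matching idea looks like the sketch below: score each answer candidate by the best overlap of question-plus-candidate content words with a sliding window over the story. The stopword list and window size are arbitrary choices; the question-type handling and coreference resolution described above would be layered on top of a scorer of this kind.

```python
# Minimal sliding-window lexical-matching baseline for multiple-choice reading
# comprehension, in the spirit of the MCTest baseline.
import re

STOP = {"the", "a", "an", "of", "to", "in", "and", "was", "did", "what", "who"}

def tokens(text):
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP]

def score_candidate(story_tokens, question, candidate, window=10):
    target = set(tokens(question) + tokens(candidate))
    best = 0
    for i in range(max(1, len(story_tokens) - window + 1)):
        best = max(best, len(target & set(story_tokens[i:i + window])))
    return best

def answer(story, question, candidates):
    st = tokens(story)
    return max(candidates, key=lambda c: score_candidate(st, question, c))

story = ("James went to the park with his dog. He threw a red ball "
         "and the dog brought it back to him.")
print(answer(story, "What did James throw to his dog?",
             ["a red ball", "a stick", "his hat", "a frisbee"]))
```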
Prosody refers to intonation, rhythm and lexical stress patterns of spoken language that convey linguistic and paralinguistic information such as emphasis, intent, attitude and emotion of a speaker. Prosodic information associated with a unit of speech, say, syllable, word, phrase or clause, influence all the segments of the unit in an utterance. In this sense they are also referred to as suprasegmentals (Lehiste, 1970) . Prosody in general is highly dependent on individual speaker style, gender, dialect and other phonological factors. The difficulty in reliably characterizing suprasegmental information present in speech has resulted in symbolic and parameteric prosody labeling standards like ToBI (Tones and Break Indices) (Silverman et al., 1992) and Tilt model (Taylor, 1998) respectively.Prosody in spoken language can be characterized through acoustic features or lexical features or both. Acoustic correlates of duration, intensity and pitch, like syllable nuclei duration, short time energy and fundamental frequency (f0) are some acoustic features that are perceived to confer prosodic prominence or stress in English. Lexical features like partsof-speech, syllable nuclei identity, syllable stress of neighboring words have also demonstrated high degree of discriminatory evidence in prosody detection tasks.The interplay between acoustic and lexical features in characterizing prosodic events has been successfully exploited in text-to-speech synthesis (Bulyko and Ostendorf, 2001; Ma et al., 2003) , speech recognition (Hasegawa-Johnson et al., 2005) and speech understanding (Wightman and Ostendorf, 1994) . Text-to-speech synthesis relies on lexical features derived predominantly from the input text to synthesize natural sounding speech with appropriate prosody. In contrast, output of a typical automatic speech recognition (ASR) system is noisy and hence, the acoustic features are more useful in predicting prosody than the hypothesized lexical transcript which may be erroneous. Speech understanding systems model both the lexical and acoustic features at the output of an ASR to improve natural language understanding. Another source of renewed interest has come from spoken language translation (Nöth et al., 2000; Agüero et al., 2006) . A prerequisite for all these applications is accurate prosody detection, the topic of the present work.In this paper, we describe our framework for building an automatic prosody labeler for English. We report results on the Boston University (BU) Radio Speech Corpus (Ostendorf et al., 1995) and Boston Directions Corpus (BDC) (Hirschberg and Nakatani, 1996) , two publicly available speech corpora with manual ToBI annotations intended for experiments in automatic prosody labeling. We condition prosody not only on word strings and their parts-of-speech but also on richer syntactic information encapsulated in the form of Supertags (Bangalore and Joshi, 1999) . We propose a maximum entropy modeling framework for the syntactic features. We model the acoustic-prosodic stream with two different models, a maximum entropy model and a more traditional hidden markov model (HMM). In an automatic prosody labeling task, one is essentially try-ing to predict the correct prosody label sequence for a given utterance and a maximum entropy model offers an elegant solution to this learning problem. The framework is also robust in the selection of discriminative features for the classification problem. 
So, given a word sequence $W = \{w_1, \dots, w_n\}$ and a set of acoustic-prosodic features $A = \{o_1, \dots, o_T\}$, the best prosodic label sequence $L^* = \{l_1, l_2, \dots, l_n\}$ is obtained as follows: $L^* = \arg\max_L P(L|A, W)$ (1) $= \arg\max_L P(L|W) \cdot P(A|L, W)$ (2) $\approx \arg\max_L P(L|\Phi(W)) \cdot P(A|L, W)$ (3), where $\Phi(W)$ is the syntactic feature encoding of the word sequence $W$. The first term in Equation 3 corresponds to the probability obtained through our maximum entropy syntactic model. The second term in Equation 3, computed by an HMM, corresponds to the probability of the acoustic data stream, which is assumed to be dependent only on the prosodic label sequence. The paper is organized as follows. In section 2 we describe related work in automatic prosody labeling, followed by a description of the data used in our experiments in section 3. We present prosody prediction results from off-the-shelf synthesizers in section 4. Section 5 details our proposed maximum entropy syntactic-prosodic model for prosody labeling. In section 6, we describe our acoustic-prosodic model, and we discuss our results in section 7. We finally conclude in section 8 with directions for future work. | 0 |
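Equation (3) factors the decision into a syntactic term and an acoustic term that are trained separately and combined at decoding time. The following is a minimal sketch of that combination step, assuming the two component models are available as black-box log-probability functions; the toy label inventory, feature dictionaries and scoring heuristics are illustrative only, not the paper's models.

```python
import math
from itertools import product

LABELS = ["accent", "none"]  # toy prosodic label inventory

def maxent_syntactic_logprob(labels, syntactic_feats):
    """Stand-in for log P(L | Phi(W)) from the maximum entropy syntactic model."""
    # toy heuristic: prefer 'accent' on content words
    return sum(math.log(0.8 if (l == "accent") == f["content_word"] else 0.2)
               for l, f in zip(labels, syntactic_feats))

def hmm_acoustic_loglik(labels, acoustic_feats):
    """Stand-in for log P(A | L) from the acoustic-prosodic HMM."""
    # toy heuristic: a large f0 range is more likely under 'accent'
    return sum(math.log(0.7 if (l == "accent") == (a["f0_range"] > 30.0) else 0.3)
               for l, a in zip(labels, acoustic_feats))

def decode(syntactic_feats, acoustic_feats):
    """Brute-force argmax over label sequences of Eq. (3)'s product of the two terms."""
    best = max(product(LABELS, repeat=len(syntactic_feats)),
               key=lambda L: maxent_syntactic_logprob(L, syntactic_feats)
                             + hmm_acoustic_loglik(L, acoustic_feats))
    return list(best)

syn = [{"content_word": False}, {"content_word": True}, {"content_word": True}]
acu = [{"f0_range": 10.0}, {"f0_range": 45.0}, {"f0_range": 12.0}]
print(decode(syn, acu))
```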
Polarity shifters are content words that exhibit semantic properties similar to negation. For example, the negated statement in (1) can also be achieved by the verbal shifter fail instead of the negation not, as shown in (2).(1) Peter did not pass the exam.(2) Peter failed shifter to pass the exam.As with negation words, polarity shifters change the polarity of a statement. This can happen to both positive and negative statements. In (3) the positive polarity of scholarship is shifted by denied, resulting in a negative polarity for the phrase. Conversely, the overall polarity of (4) is positive despite the negative polarity of pain. Polarity shifting is also caused by other content word classes, such as nouns (e.g. downfall) and adjectives (e.g. devoid). However, this work focusses on verbs, due to their importance as minimal semantic units, far-reaching scopes and potential basis for nominal shifter lexicons (see §2.2.). Knowledge of polarity shifting is important for a variety of tasks, especially sentiment analysis (Wiegand et al., 2010; Liu, 2012; Wilson et al., 2005) , as well as relation extraction (Sanchez-Graillet and Poesio, 2007) and textual entailment recognition (Harabagiu et al., 2006) . The majority of research into polarity shifting for sentiment analysis has focussed on negation words (Wiegand et al., 2010; Schouten and Frasincar, 2016; Pak and Paroubek, 2010) . Negation words (e.g. not, no, never) are mostly function words, of which only a small number exists, so exhaustive coverage is comparatively simple. Content word classes, such as verbs, are considerably more difficult to cover comprehensively due to their sheer number. For example, WordNet (Miller et al., 1990) contains over 10k verbal lemmas. Most verbs are also far less frequent than common negation words, making individual verbal shifters seem less important. However, overall, verbal shifter lemmas occur 2.6 times as often as negation words (see §4.). Most existing resources on negation and polarity shifting cover few to no instances of verbal shifters (see §2.3.). To remedy this, we introduce a complete lexicon of verbal shifters with annotations of polarity shifters and their shifting scope for each word sense. Our contributions are as follows:(i) A complete lexicon of verbal polarity shifters, covering all verbs found in WordNet 3.1. (ii) A fine grained annotation, labelling every sense of a verb separately. (iii) Annotations for shifter scope, indicating which parts of a sentence are affected by the shifting.The entire dataset is publicly available. 1 | 0 |
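A lexicon of this kind is typically consumed by a sentiment system as a sense-level lookup that decides whether to flip the polarity of material in the shifter's scope. Below is a minimal sketch under simplifying assumptions (a toy lexicon keyed by invented sense ids and a one-word scope); it is not the resource's actual format or sense inventory.

```python
# toy excerpt of a sense-level shifter lexicon: (lemma, sense_id) -> does it shift?
SHIFTER_LEXICON = {
    ("fail", 1): True,    # "fail to do X" shifts the polarity of X
    ("fail", 2): False,   # e.g. a hypothetical non-shifting sense
    ("deny", 1): True,
    ("pass", 1): False,
}

PRIOR_POLARITY = {"scholarship": +1, "pain": -1, "exam": 0}

def phrase_polarity(verb, sense_id, argument_head):
    """Polarity of 'verb + argument': flip the argument's prior polarity if the verb sense shifts."""
    prior = PRIOR_POLARITY.get(argument_head, 0)
    if SHIFTER_LEXICON.get((verb, sense_id), False):
        return -prior
    return prior

print(phrase_polarity("deny", 1, "scholarship"))  # -1: 'denied the scholarship' becomes negative
print(phrase_polarity("fail", 1, "exam"))         # 0: a neutral argument stays neutral
```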
Antonymy is one of the fundamental relations shaping the organization of the semantic lexicon, and its identification is very challenging for computational models (Mohammad et al., 2008; Deese, 1965; Deese, 1964). Yet, antonymy is essential for many Natural Language Processing (NLP) applications, such as Information Retrieval (IR), Ontology Learning (OL), Machine Translation (MT), Sentiment Analysis (SA) and Dialogue Systems (Roth and Schulte im Walde, 2014; Mohammad et al., 2013). In particular, the automatic identification of semantic opposition is a crucial component for the detection and generation of paraphrases (Marton et al., 2011), the understanding of contradictions (de Marneffe et al., 2008) and the detection of humor (Mihalcea and Strapparava, 2005). Several existing computational lexicons and thesauri explicitly encode antonymy, together with other semantic relations. Although such resources are often used to support the above-mentioned NLP tasks, hand-coded lexicons and thesauri have low coverage and many scholars have shown their limits: Mohammad et al. (2013), for example, have noticed that "more than 90% of the contrasting pairs in GRE closest-to-opposite questions are not listed as opposites in WordNet". The automatic identification of semantic relations is a core task in computational semantics. Distributional Semantic Models (DSMs) have often been exploited for their well-known ability to identify semantically similar lexemes using corpus-derived co-occurrences encoded as distributional vectors (Santus et al., 2014a; Baroni and Lenci, 2010; Turney and Pantel, 2010; Padó and Lapata, 2007; Sahlgren, 2006). These models are based on the Distributional Hypothesis (Harris, 1954) and represent lexical semantic similarity as a function of distributional similarity, which can be measured by vector cosine (Turney and Pantel, 2010). However, these models are characterized by a major shortcoming: they are not able to discriminate among different kinds of semantic relations linking distributionally similar lexemes. For instance, the nearest neighbors of castle in the vector space typically include hypernyms like building, co-hyponyms like house, meronyms like brick, antonyms like shack, together with other semantically related words. While impressive results have been achieved in the automatic identification of synonymy (Baroni and Lenci, 2010; Padó and Lapata, 2007), methods for the identification of hypernymy (Santus et al., 2014a; Lenci and Benotto, 2012) and antonymy (Roth and Schulte im Walde, 2014; Mohammad et al., 2013) still need much work to achieve satisfying precision and coverage (Turney, 2008; Mohammad et al., 2008). This is the reason why semi-supervised pattern-based approaches have often been preferred to purely unsupervised DSMs (Pantel and Pennacchiotti, 2006; Hearst, 1992). In this paper, we introduce APAnt, a new Average-Precision-based distributional measure that is able to successfully discriminate antonyms from synonyms, outperforming vector cosine and a baseline system based on the co-occurrence hypothesis, formulated by Charles and Miller in 1989 and confirmed in other studies, such as those of Justeson and Katz (1991) and Fellbaum (1995). Our measure is based on a distributional interpretation of the so-called paradox of simultaneous similarity and difference between antonyms (Cruse, 1986). According to this paradox, antonyms are similar to synonyms in every dimension of meaning except one. 
Our hypothesis is that the different dimension of meaning is a salient one and it can be identified with DSMs and exploited for discriminating antonyms from synonyms.The rest of the paper is organized as follows. Section 2 gives the definition and illustrates the various types of antonyms. Section 3 gives a brief overview of related works. Section 4 presents the proposed APAnt measure. Section 5 shows the performance evaluation of the proposed measure. Section 6 is the conclusion. | 0 |
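The vector cosine baseline that APAnt is compared against can be computed directly from co-occurrence counts. Since the APAnt formula itself is not reproduced in this introduction, the sketch below shows only the cosine and a simple shared-salient-context score in the spirit of the co-occurrence hypothesis; the toy vectors and counts are invented for illustration.

```python
import math

# toy distributional vectors: word -> {context: co-occurrence count}
VECTORS = {
    "castle":   {"king": 8, "stone": 5, "tower": 6, "old": 3},
    "shack":    {"old": 4, "wood": 6, "poor": 5, "stone": 1},
    "fortress": {"king": 6, "stone": 7, "tower": 5, "wall": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def shared_context_ratio(u, v, top_k=3):
    """Co-occurrence-style score: overlap among each word's top-k salient contexts."""
    top_u = {k for k, _ in sorted(u.items(), key=lambda kv: -kv[1])[:top_k]}
    top_v = {k for k, _ in sorted(v.items(), key=lambda kv: -kv[1])[:top_k]}
    return len(top_u & top_v) / top_k

for pair in [("castle", "fortress"), ("castle", "shack")]:
    u, v = VECTORS[pair[0]], VECTORS[pair[1]]
    print(pair, round(cosine(u, v), 3), shared_context_ratio(u, v))
```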
A dialogue state tracker is a core component in most of today's spoken dialogue systems (SDS). The goal of dialogue state tracking (DST) is to monitor the user's intentional states during the course of the conversation, and provide a compact representation, often called the dialogue states, for the dialogue manager (DM) to decide the next action to take. In task-oriented dialogues, or slot-filling dialogues in their simplest form, the dialogue agent is tasked with helping the user achieve simple goals such as finding a restaurant or booking a train ticket. As the name itself suggests, a slot-filling dialogue is composed of a predefined set of slots that need to be filled through the conversation. The dialogue states in this case are therefore the values of these slot variables, which are essentially the search constraints the DM has to maintain in order to perform the database lookup. Traditionally in the research community, as exemplified in the dialogue state tracking challenge (DSTC), which has become a standard evaluation framework for DST research, the dialogues are usually constrained by a fixed domain ontology, which essentially describes in detail all the possible values that each predefined slot can take. Having access to such an ontology can simplify the tracking problem in many ways; however, in many of the SDS applications we have built in the industry, such an ontology was not obtainable. Oftentimes, the backend databases are only exposed through an external API, which is owned and maintained by our partners. It is usually not possible to gain access to their data or enumerate all possible slot values in their knowledge base. Even if such lists or dictionaries exist, they can be very large in size and highly dynamic (e.g. new songs added, new restaurants opened, etc.). The setting is therefore not amenable to many of the previously introduced DST approaches, which generally rely on classification over a fixed ontology or on scoring each slot-value pair separately by enumerating the candidate list. In this paper, we will therefore focus on this particular aspect of the DST problem, which has rarely been discussed in the community, namely how to perform state tracking in the absence of a comprehensive domain ontology and how to handle unknown slot values effectively. It is worth noting that end-to-end (E2E) modeling for task-oriented dialogue systems has become a popular trend (Williams and Zweig, 2016; Zhao and Eskenazi, 2016), although most of these efforts focus on E2E policy learning and language generation, and still rely on explicit dialogue states in their models. While fully E2E approaches which completely obviate explicit DST have been attempted (Bordes and Weston, 2016; Eric and Manning, 2017a,b; Dhingra et al., 2017), their generality and scalability in real-world applications remain to be seen. In reality, a dedicated DST component remains a central piece of most dialogue systems, even in most of the proclaimed E2E models. E2E approaches for DST, i.e. joint modeling of SLU and DST, have also been presented in the literature (Henderson et al., 2014b,c; Mrksic et al., 2015; Zilka and Jurcicek, 2015; Perez and Liu, 2017). In these methods, the conventional practice of having a separate spoken language understanding (SLU) module is replaced by various E2E architectures that couple SLU and DST altogether. They are sometimes called word-based state tracking as the dialogue states are derived directly from word sequences as opposed to SLU outputs. 
In the absence of SLU to generate value candidates, most E2E trackers today can only operate with fixed value sets. To address this limitation, we introduce an E2E tracker that allows us to effectively handle unknown value sets. The proposed solution is based on the recently introduced pointer network (PtrNet) (Vinyals et al., 2015), which essentially performs state tracking in an extractive fashion similar to the sequence labeling techniques commonly utilized for slot tagging in SLU (Tur and Mori, 2011). Our proposed technique is similar in spirit to the recent work of Rastogi et al. (2018), which also targets the problem of unbounded and dynamic value sets. They introduce a sophisticated candidate generation strategy followed by a neural-network-based scoring mechanism for each candidate. Despite the similarity in motivation, their system relies on SLU to generate value candidates, resulting in an extra module to maintain and potential error propagation, as commonly faced by pipelined systems. The contributions of this paper are threefold: Firstly, we target a very practical yet rarely investigated problem in DST, namely handling unknown slot values in the absence of a predefined ontology. Secondly, we describe a novel E2E architecture without SLU, based on the PtrNet, to perform state tracking. Thirdly, we also introduce an effective dropout technique for training the proposed model, which drastically improves the recall rate of unknown slot values. The rest of the paper is structured as follows: We give a brief review of related work in the field in Section 2 and point out its limitations. The PtrNet and its proposed application in DST are described in Section 3. In Section 4, we demonstrate some caveats regarding the use of the PtrNet and propose an additional classification module as a complementary component. The targeted dropout technique, which can be essential for generalization on some datasets, is described in Section 5. The experimental setup and results are presented in Section 6, followed by conclusions in Section 7. | 0 |
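At the heart of the pointer-based tracker is an attention distribution over the positions of the dialogue history, from which a slot value is read off extractively. The numpy sketch below illustrates only that pointing step for a single slot; the hidden dimension, the random stand-in "encoder states", and the greedy one-token readout are illustrative assumptions, not the paper's architecture details.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def point(encoder_states, query):
    """Attention over token positions: scores = H q, normalized to a distribution."""
    return softmax(encoder_states @ query)

# toy dialogue turn and pretend encoder states (seq_len x hidden)
tokens = ["i", "need", "a", "table", "at", "nandos", "tonight"]
H = rng.normal(size=(len(tokens), 16))
slot_query = H[5] + 0.1 * rng.normal(size=16)   # a query that should attend near "nandos"

dist = point(H, slot_query)
start = int(dist.argmax())                       # greedy single-token readout
print("pointer distribution:", np.round(dist, 2))
print("predicted value for slot 'restaurant-name':", tokens[start])
```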
Sentence similarity computation plays an important role in text summarization and social network applications (Erkan et al., 2004; Jin et al., 2011). The SemEval 2012 competition initiated a task targeted at Semantic Textual Similarity (STS) between sentence pairs (Agirre et al., 2012). Given a set of sentence pairs, participants are required to assign to each sentence pair a similarity score. Because a sentence has only a limited number of content words, it is difficult to determine sentence similarities. To address this problem, Hatzivassiloglou et al. (1999) proposed to use linguistic features as indicators of text similarity, addressing the sparse representation of sentences. Mihalcea et al. (2006) measured sentence similarity using the component words in sentences. Li et al. (2006) proposed to incorporate the semantic vector and word order to calculate sentence similarity. Biemann et al. (2012) applied a log-linear regression model, combining simple string-based measures (for example, word n-grams) and semantic similarity measures (for example, textual entailment). Similarly, Saric et al. (2012) used a support vector regression model which incorporates features computed from sentence pairs. The features are knowledge- and corpus-based word similarity, n-gram overlaps, WordNet-augmented word overlap, syntactic features and so on. Xu et al. (2012) combined semantic vectors with skip bigrams to determine sentence similarity, where the skip bigrams take into account the sequential order between words. In our approach to the STS task, words in sentences are assigned appropriate senses using their contexts. Sentence similarity is computed by calculating the number of shared senses in both sentences, since it is reasonable to assume that similar sentences should have more overlapping senses. For the STS-TYPED task, variations might occur in author names, people involved, time expressions and locations. Thus, a string kernel is applied to compute the similarity between entities because it can capture variations between them. Moreover, for event similarity in the STS-TYPED task, the semantic relatedness between verbs is derived from WordNet. The rest of this paper is structured as follows. Section 2 describes sentence similarity using sense overlapping and the string kernel. Section 3 gives the performance evaluation. Section 4 is the conclusion. | 0 |
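The sense-overlap similarity can be sketched as: assign each word a sense set, take the union per sentence, and measure the overlap. The toy sense inventory below stands in for a real word sense disambiguation step over WordNet and is purely illustrative.

```python
# stand-in word -> sense-id mapping (a real system assigns senses from context via WSD)
TOY_SENSES = {
    "car": {"car#n#1"}, "automobile": {"car#n#1"},
    "bank": {"bank#n#1", "bank#n#2"},
    "money": {"money#n#1"}, "cash": {"money#n#1"},
}

def sense_set(sentence):
    senses = set()
    for w in sentence.lower().split():
        senses |= TOY_SENSES.get(w, {w})   # fall back to the surface word itself
    return senses

def sense_overlap_similarity(s1, s2):
    """Dice-style overlap of the two sentences' sense sets."""
    a, b = sense_set(s1), sense_set(s2)
    return 2 * len(a & b) / (len(a) + len(b)) if a and b else 0.0

print(sense_overlap_similarity("the car needs cash", "the automobile needs money today"))
```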
The dissemination of statistical machine translation (SMT) systems in the professional translation industry is still limited by the lack of reliability of SMT outputs, the quality of which varies to a great extent. In this context, a critical piece of information would be for MT systems to assess their output translations with automatically derived quality measures. This problem is the focus of a shared task, the aim of which is to predict the quality of a translation without knowing any human reference(s). To the best of our knowledge, all approaches so far have tackled quality estimation as a supervised learning problem (He et al., 2010; Soricut and Echihabi, 2010; Specia et al., 2010; Specia, 2011). A wide variety of features have been proposed, most of which can be described as loosely 'linguistic' features that describe the source sentence, the target sentence and the association between them (Callison-Burch et al., 2012). Surprisingly enough, information used by the decoder to choose the best translation in the search space, such as its internal scores, has hardly been considered and has never proved to be useful. Indeed, it is well known that these scores are hard to interpret and to compare across hypotheses. Furthermore, mapping the scores of a linear classifier (such as the scores estimated by MERT) into consistent probabilities is a difficult task (Platt, 2000; Lin et al., 2007). This work aims at assessing whether information extracted from the decoder search space can help to predict the quality of a translation. Rather than using the decoder score directly, we propose to consider a finer level of information, the n-gram posterior probabilities, which quantify the probability for a given n-gram to be part of the system output. These probabilities can be directly interpreted as the confidence the system has for a given n-gram to be part of the translation. As they are directly derived from the number of hypotheses in the search space that contain this n-gram, these probabilities might be more reliable than the ones estimated from the decoder scores. We first quickly review, in Section 2, the n-gram posteriors introduced by Gispert et al. (2013) and explain how they can be used in the QE task; we then describe, in Section 3, the different systems that we have developed for our participation in the WMT'13 shared task on Quality Estimation, and we assess their performance in Section 4. | 0 |
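Over an explicit n-best list the quantity is easy to state: the posterior of an n-gram is the normalized probability mass of the hypotheses that contain it. Below is a minimal sketch, assuming hypotheses come with decoder log-scores that can be exponentiated and renormalized; this is a simplification of how posteriors are actually computed over full search spaces.

```python
import math
from collections import defaultdict

def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_posteriors(nbest, n=2):
    """nbest: list of (hypothesis_tokens, decoder_log_score). Returns n-gram -> posterior."""
    z = sum(math.exp(s) for _, s in nbest)                  # normalizer over the list
    post = defaultdict(float)
    for hyp, s in nbest:
        p = math.exp(s) / z
        for g in ngrams(hyp, n):                            # presence, not count
            post[g] += p
    return dict(post)

nbest = [("the cat sat".split(), -1.0),
         ("the cat sit".split(), -1.6),
         ("a cat sat".split(), -2.3)]
for g, p in sorted(ngram_posteriors(nbest).items(), key=lambda kv: -kv[1]):
    print(" ".join(g), round(p, 3))
```

Quality estimation features could then be derived, for instance, by averaging such posteriors over the n-grams of the 1-best hypothesis.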
Automatic summarization tasks are often addressed with statistical methods: a first type of approach, introduced by Kupiec et al. (1995), involves using a set of features of different types to describe sentences, and supervised learning algorithms to learn an empirical model of how those features interact to identify important sentences. This kind of approach has been very popular in summarization; however, the difficulty of this task often requires more complex representations, and different kinds of models to learn relevance in text have been proposed, such as discourse-based (Marcu, 1997) or network-based (Salton et al., 1997) models and many others. Domain knowledge usually is present in the choice of features and algorithms, but it is still an open issue how best to capture the domain knowledge required to identify what is relevant in the text; manual approaches to build knowledge bases tend to be tedious, while automatic approaches require large amounts of training data and the result may still be inferior. In this paper we present our approach to summarizing legal documents, using knowledge acquisition to combine different summarization techniques. In summarization, different kinds of information can be taken into account to locate important content, at the sentence level (e.g. particular terms or patterns), at the document level (e.g. frequency information, discourse information) and at the collection level (e.g. document frequencies or citation analysis); however, the way such attributes interact is likely to depend on the context of specific cases. For this reason we have developed a set of methods for identifying important content, and we propose the creation of a Knowledge Base (KB) that specifies which content should be used in different contexts, and how this should be combined. We propose to use the Ripple Down Rules (RDR) (Compton and Jansen, 1990) methodology to build this knowledge base: RDR has already proven to be a very effective way of building KBs, and has been used successfully in several NLP tasks (see Section 2). This kind of approach differs from the dominant supervised learning approach, in which we first annotate text to identify relevant fragments, and then use supervised learning algorithms to learn a model; one example in the legal domain being the work of Hachey and Grover (2006). Our approach eliminates the need for separate manual annotation of text, as the rules are built by a human who judges the relevance of text and directly creates the set of rules in a single process, rather than annotating the text and then separately tuning the learning model. We apply this approach to the summarization of legal case reports, a domain which has an increasing need for automatic text processing, to cope with the large body of documents that is case law. 
Table 1 (catchphrase examples from two case reports): COSTS - proper approach to admiralty and commercial litigation - goods transported under bill of lading incorporating Himalaya clause - shipper and consignee sued ship owner and stevedore for damage to cargo - stevedore successful in obtaining consent orders on motion dismissing proceedings against it based on Himalaya clause - stevedore not furnishing critical evidence or information until after motion filed - whether stevedore should have its costs - importance of parties cooperating to identify the real issues in dispute - duty to resolve uncontentious issues at an early stage of litigation - stevedore awarded 75% of its costs of the proceedings. MIGRATION - partner visa - appellant sought to prove domestic violence by the provision of statutory declarations made under State legislation - "statutory declaration" defined by the Migration Regulations 1994 (Cth) to mean a declaration "under" the Statutory Declarations Act 1959 (Cth) in Div 1.5 - contrary intention in reg 1.21 as to the inclusion of State declarations under s 27 of the Acts Interpretation Act - statutory declaration made under State legislation is not a statutory declaration "under" the Commonwealth Act - appeal dismissed. Countries with "common law" traditions, such as Australia, the UK and the USA, rely heavily on the concept of precedent: on how the courts have interpreted the law in individual cases, in a process that is known as stare decisis (Moens, 2007), so legal professionals (lawyers, judges and scholars) have to deal with large volumes of past court decisions. Automatic summarization can greatly enhance access to legal repositories; however, legal cases, rather than summaries, often contain lists of catchphrases: phrases that present the important legal points of a case. The presence of catchphrases can aid research of case law, as they give a quick impression of what the case is about: "the function of catchwords is to give a summary classification of the matters dealt with in a case. [...] Their purpose is to tell the researcher whether there is likely to be anything in the case relevant to the research topic" (Olsson, 1999). For this reason, rather than constructing summaries, we aim at extracting catchphrases from the full text of a case report. Examples of catchphrases from two case reports are shown in Table 1. In this paper we present our approach towards automatic catchphrase extraction from legal case reports, using a knowledge acquisition approach according to which rules are manually created to combine a range of diverse methods to locate catchphrase candidates in the text. | 0 |
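Ripple Down Rules organize knowledge as a default rule refined by exception rules added in the context of specific cases, which is what makes incremental, annotation-free knowledge acquisition practical. Below is a minimal sketch of such a structure for filtering catchphrase candidates; the conditions and thresholds are invented for illustration and are not the authors' knowledge base.

```python
class RDRNode:
    """A single Ripple Down Rules node: condition -> conclusion, with except/else branches."""
    def __init__(self, condition, conclusion, except_branch=None, else_branch=None):
        self.condition, self.conclusion = condition, conclusion
        self.except_branch, self.else_branch = except_branch, else_branch

    def classify(self, case):
        if self.condition(case):
            # an exception refines the conclusion for cases that also satisfy it
            if self.except_branch:
                refined = self.except_branch.classify(case)
                if refined is not None:
                    return refined
            return self.conclusion
        return self.else_branch.classify(case) if self.else_branch else None

# toy knowledge base: keep candidates containing a legal-term cue,
# unless they are very long; otherwise fall back to a tf-idf threshold.
kb = RDRNode(
    condition=lambda c: c["has_legal_term"],
    conclusion="catchphrase-candidate",
    except_branch=RDRNode(lambda c: c["length"] > 40, "reject"),
    else_branch=RDRNode(lambda c: c["tfidf"] > 0.5, "catchphrase-candidate"),
)

print(kb.classify({"has_legal_term": True, "length": 12, "tfidf": 0.1}))
print(kb.classify({"has_legal_term": True, "length": 60, "tfidf": 0.1}))
print(kb.classify({"has_legal_term": False, "length": 8, "tfidf": 0.7}))
```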
To date we have relied on the basic computing architecture as laid out by Alan Turing during the late 1940s. Little in essence has changed concerning the basic framework, encompassing a CPU, processing instructions, and volatile and non-volatile memory stores. It has served us quite well, and we can all see the benefits around us in our daily lives, from automatic ticket machines to tablets. Nevertheless, this approach has many practical limitations when it comes to trying to address the complex world of intelligence and that uniquely human and idiosyncratic method of verbal communication that we call language. Alan Turing postulated the 'Turing test': a test of a computing device's ability to exhibit intelligent behaviour, equivalent to or indistinguishable from that of a human being. We have recently seen examples of systems that purport to have passed this test (IBM's Deep Blue for chess and Watson for Jeopardy). | 0 |
Emotion analysis as well as recommendation technology has drawn a lot of attention in the natural language processing research community. The development of fundamental approaches as well as applications has been proposed (Das, 2011; Sarwar et al., 2001; Zheng et al., 2010). However, most of them were Internet applications, and to the best knowledge of the authors, these technologies have not yet been involved in ambience creation. To create an intelligent living space, some researchers utilized facial expression and speech recognizers to detect emotions (Busso et al., 2004), but then the accompanying cameras and microphones were necessary. Some researchers tried to use sensors to watch the heartbeat and the body temperature of residents to know their current emotion for further applications, but the problem was that users had to wear sensors and this was inconvenient. Instead of watching body signals, we postulate that communication among people is one of the important factors influencing their emotions. Therefore, we tried to find clues in the textual conversations of the residents in order to detect their psychological state. There are many ways to categorize emotions. Different emotion states were used for experiments in previous research (Bellegarda, 2010). To find suitable categories of emotions, we adopted the three-layered emotion hierarchy proposed by Parrott (2001). Six emotions are in the first layer, including love, joy, surprise, anger, sadness and fear. The second layer includes 25 emotions, and the third layer includes 135 emotions. Using this hierarchical classification benefits the system: we can categorize emotions from rough to fine granularities and fall back to the upper level when the experimental materials are insufficient. How to map categories from other studies to ours becomes clearer, and annotators have more information when marking their current emotion. As to the music, most researchers looked for the emotions in songs or rhythms (Yang and Chen, 2011; Zbikowski, 2011). They classified music into different emotional categories and developed systems to tell what emotion a song might bring to a listener. However, if the aim is to create a comfortable ambience, the question becomes what songs a person in a certain emotional state wants to listen to. A happy user does not always enjoy happy songs, and vice versa. In this case, the technology developed in the previous work did not meet the new requirement. IlluMe was designed for a small personal living space. We expect that users would like to use it because this system could interactively respond to their personal status to provide a feeling of companionship. We view the IlluMe system as a realization of detecting emotions from users' textual conversations and then recommending the best ambience accordingly. There are three major contributions in the development of the system. First, a corpus for ambience creation according to emotions was constructed. Second, IlluMe demonstrates a way to apply state-of-the-art emotion analysis and recommendation technology to create an intelligent living space. Third, along with the developed technology, several further applications utilizing the components of IlluMe become feasible. | 0 |
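The three-layer hierarchy makes a simple back-off strategy possible: predict at the fine-grained level, and fall back to the parent emotion when the evidence is too weak. The sketch below illustrates that back-off over a small excerpt of the hierarchy; the excerpt and the confidence threshold are illustrative assumptions, not the system's actual configuration.

```python
# small excerpt of a three-layer emotion hierarchy: fine label -> parent
PARENT = {
    "cheerfulness": "joy", "contentment": "joy",
    "irritation": "anger", "rage": "anger",
    "joy": "joy", "anger": "anger",          # top-level labels map to themselves
}

def backoff_label(scores, threshold=0.6):
    """Pick the best fine-grained label; back off to its parent if confidence is low."""
    label, conf = max(scores.items(), key=lambda kv: kv[1])
    return label if conf >= threshold else PARENT[label]

print(backoff_label({"cheerfulness": 0.72, "irritation": 0.28}))   # confident: cheerfulness
print(backoff_label({"cheerfulness": 0.41, "contentment": 0.38}))  # weak evidence: back off to joy
```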
In the last decade, there has been substantial growth in the area of digital psychiatry. Automated methods using natural language processing have been able to detect mental health disorders based on a person's language in a variety of data types, such as social media (Mowery et al., 2016; Morales et al., 2017) , speech (Iter et al., 2018) and other writings (Kayi et al., 2017; Just et al., 2019) . As in-person clinical visits are made increasingly difficult by socioeconomic barriers and public-health crises, such as COVID-19, tools for measuring mental wellness using implicit signal become more important than ever (Abdel-Rahman, 2019; Bojdani et al., 2020) .Early work in this area leveraged traditional human subject studies in which individuals with clinically validated psychiatric diagnoses volunteered their language data to train classifiers and perform quantitative analyses (Rude et al., 2004; Jarrold et al., 2010) . In an effort to model larger, more diverse populations with less overhead, a substantial portion of research in the last decade has instead explored data annotated via automated mechanisms (Coppersmith et al., 2015a; Winata et al., 2018) .Studies leveraging proxy-based annotations have supported their design by demonstrating alignment with existing psychological theory regarding language usage by individuals living with a mental health disorder (Cavazos-Rehg et al., 2016; Vedula and Parthasarathy, 2017) . For example, feature analyses have highlighted higher amounts of negative affect and increased personal pronoun prevalence amongst depressed individuals (Park et al., 2012; De Choudhury et al., 2013) . Given these consistencies, the field has largely turned its attention toward optimizing predictive power via state of the art models (Orabi et al., 2018; Song et al., 2018) .The ultimate goal of these efforts has been threefold-to better personalize psychiatric care, to enable early intervention, and to monitor population-level health outcomes in real time. Nonetheless, research has largely trudged forward without stopping to ask one critical question: do models of mental health conditions trained on automatically annotated social media data actually generalize to new data platforms and populations?Typically, the answer is no-or at least not without modification. Performance loss is to be expected in a variety of scenarios due to underlying distributional shifts, e.g. domain transfer (Shimodaira, 2000; Subbaswamy and Saria, 2020) . Accordingly, substantial effort has been devoted to developing computational methods for domain adaptation (Imran et al., 2016; Chu and Wang, 2018) . Outcomes from this work often provide a solid foundation for use across multiple natural language processing tasks (Daume III and Marcu, 2006) . However, it is unclear to what extent factors specific to mental health require tailored intervention.In this study, we demonstrate that at a baseline, proxy-based models of mental health status do not transfer well to other datasets annotated via automated mechanisms. Supported by five widely used datasets for predicting depression in social media users from both Reddit and Twitter, we present a combination of qualitative and quantitative experiments to identify troublesome confounds that lead to poor predictive generalization in the mental health research space. We then enumerate evidencebased recommendations for future mental health dataset construction.Ethical Considerations. 
Given the sensitive nature of data containing mental health status of individuals, additional precautions based on guidance from Benton et al. (2017a) were taken during all data collection and analysis procedures. Data sourced from external research groups was retrieved according to each dataset's respective data usage policy. The research was deemed exempt from review by our Institutional Review Board (IRB) under 45 CFR § 46.104. | 0 |
Recent research into the nature of morphology has demonstrated the feasibility of several approaches to the definition of a language's inflectional system. Central to these approaches is the notion of an inflectional paradigm. In general terms, the inflectional paradigm of a lexeme L can be regarded as a set of cells, where each cell is the pairing of L with a set of morphosyntactic properties, and each cell has a word form as its realization; for instance, the paradigm of the lexeme walk includes cells such as <WALK, {3rd singular present indicative}> and <WALK, {past}>, whose realizations are the word forms walks and walked.Given this notion, one approach to the definition of a language's inflectional system is the realizational approach (Matthews 1972 , Zwicky 1985 , Anderson 1992 , Corbett & Fraser 1993 , Stump 2001 ; in this approach, each word form in a lexeme's paradigm is deduced from the lexical and morphosyntactic properties of the cell that it realizes by means of a system of morphological rules. For instance, the word form walks is deduced from the cell <WALK, {3rd singular present indicative}> by means of the rule of -s suffixation, which applies to the root walk of the lexeme WALK to express the property set {3rd singular present indicative}.We apply the realizational approach to the study of Yorùbá verbs. Yorùbá, an Edekiri language of the Niger-Congo family (Gordon 2005) , is the native language of more than 30 million people in West Africa. Although it has many dialects, all speakers can communicate effectively using Standard Yorùbá (SY), which is used in education, mass media and everyday communication (Adéwo . lé 1988) .We represent our realizational analysis of SY in the KATR formalism (Finkel, Shen, Stump & Thesayi 2002) . KATR is based on DATR, a formal language for representing lexical knowledge designed and implemented by Roger Evans and Gerald Gazdar (Evans & Gazdar 1989) . Our information about SY is primarily due to the expertise of the second author.This research is part of a larger effort aimed at elucidating the morphological structure of natural languages. In particular, we are interested in identifying the ways in which default-inheritance relations describe a language's morphology as well as the theoretical relevance of the traditional notion of principal parts. To this end, we have applied similar techniques to Hebrew (Finkel & Stump 2007) , Latin (Finkel & Stump to appear, 2009b) , and French (Finkel & Stump to appear, 2009a) .As we demonstrate below, the realizational approach leads to a KATR theory that provides a clear picture of the morphology of SY verbs. Different audiences might find different aspects of it attractive.• A linguist can peruse the theory to gain an appreciation for the structure of SY verbs, with all exceptional cases clearly marked either by morphophonological diacritics or by rules of sandhi, which are segregated from all the other rules.• A teacher of the language can use the theory as a foundation for organizing lessons in morphology.• A student of the language can suggest verb roots and use the theory to generate all the appropriate forms, instead of locating the right paradigm in a book and substituting consonants. | 0 |
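The realizational idea can be made concrete as ordered rules that attach exponents to a root whenever a cell's property set matches. The sketch below uses the English example from the text; the rule format is an illustrative stand-in, not KATR or DATR syntax.

```python
LEXICON = {"WALK": "walk", "PASS": "pass"}

# realization rules: (property set that must be present, suffix to attach), most specific first
RULES = [
    (frozenset({"3rd", "singular", "present", "indicative"}), "s"),
    (frozenset({"past"}), "ed"),
    (frozenset(), ""),                      # default: bare root
]

def realize(lexeme, properties):
    """Realize the cell <lexeme, properties> by the first rule whose properties are all present."""
    root = LEXICON[lexeme]
    for required, suffix in RULES:
        if required <= properties:
            return root + suffix
    return root

print(realize("WALK", {"3rd", "singular", "present", "indicative"}))  # walks
print(realize("WALK", {"past"}))                                      # walked
```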
Sign language recognition (SLR) is a complex problem. Sign languages are, after all, complex visual languages. Generally, one can say that sign languages have five parameters. A sign is distinguished by hand shape, hand orientation, movement, location, and non-manual components such as mouth shape and eyebrow shape. However, these parameters do not necessarily fully identify signs: two signs can have the same execution but different meaning. Furthermore, identical signs are often executed differently based on several factors, such as age, gender, the dominant hand, and dialects. Additionally, there is a high degree of co-articulation in sign languages: both hands can produce different signs at the same time. SLR is typically tackled using machine learning approaches.Deep learning in particular has proven to be very powerful for tasks such as image classification (Krizhevsky et al., 2012) and neural translation (Vaswani et al., 2017) . Deep learning algorithms require large datasets in order to learn meaningful representations that generalize well to unseen data, especially for complex problems such as SLR. While large video corpora are available for several sign languages, they consist mostly of unlabeled data. Labeling the sign language corpora is a time-consuming process that requires the annotator to know sign language and its specific phonetic and phonological properties. As a consequence, the portion of a video corpus that is labeled grows only slowly. Larger datasets exist (Chai et al., 2014; Huang et al., 2018; Vaezi Joze and Koller, 2019) , but several consist of recordings of persons performing signs in repetition (Ronchetti et al., 2016; Ko et al., 2018) . These datasets are often not representative of real world sign language, as they contain artificial repetitions of isolated signs (Bragg et al., 2019) . Because the accuracy -the measure that is most commonly used to assess the performance of sign classification systems -is saturated on such datasets when using deep learning methods (Konstantinidis et al., 2018; Ko et al., 2018) , it is more challenging and interesting to perform SLR on real sign language data. This also paves the way for sign language translation in the future. The question can be posed if a deep learning system can be used to speed up the annotation process of sign language corpora, in order to obtain more labeled data. This could for example be done by creating a suggestion system for corpus annotators, that provides a list of likely glosses given a selected video fragment of a sign. In this work, we use a Long Short-Term Memory (LSTM) network as a baseline. We then present and compare three methods based on the transformer network architecture, that consistently outperform this baseline. The methods will be applied in the creation of the proposed suggestion system. | 0 |
In the last twenty years, experiments in Artificial Language Learning (ALL) have become increasingly popular for the study of the basic mechanisms that operate when subjects are exposed to language-like stimuli. Thanks to these experiments, we know that 8-month-old infants can segment a speech stream by extracting statistical information from the input, such as the transitional probabilities between adjacent syllables (Saffran et al. (1996a); Aslin et al. (1998)). This ability also seems to be present in human adults (Saffran et al., 1996b), and to some extent in nonhuman animals like cotton-top tamarins (Hauser et al., 2001) and rats (Toro and Trobalón, 2005). Even though this statistical mechanism is well attested for segmentation, it has been claimed that it does not suffice for generalization to novel stimuli or rule learning. Ignited by a study by Marcus et al. (1999), which postulated the existence of an additional rule-based mechanism for generalization, a vigorous debate emerged around the question of whether the evidence from ALL experiments supports the existence of a specialized mechanism for generalization (Peña et al. (2002); Onnis et al. (2005); Endress & Bonatti (2007); Frost & Monaghan (2016); Endress & Bonatti (2016)), echoing earlier debates about the supposed dichotomy between rules and statistics (Chomsky, 1957; Rumelhart and McClelland, 1986; Pinker and Prince, 1988; Pereira, 2000). From a Natural Language Processing perspective, the dichotomy between rules and statistics is unhelpful. In this paper, we therefore propose a different conceptualization of the steps involved in generalization in ALL. In the following sections, we will first review some of the experimental data that has been interpreted as evidence for an additional generalization mechanism (Peña et al. (2002); Endress & Bonatti (2007); Frost & Monaghan (2016)). We then reframe the interpretation of those results with our 3-step approach, a proposal of the main steps that are required for generalization, involving: (i) memorization of segments of the input, (ii) computation of the probability for unseen sequences, and (iii) distribution of this probability among particular unseen sequences. We model the first step with the Retention&Recognition model (Alhama et al., 2016). We propose that a rational characterization of the second step can be accomplished with the use of smoothing techniques (which we further demonstrate with the use of the Simple Good-Turing method; Good & Turing (1953); Gale (1995)). We then argue that the modelling results shown in these two steps already account for the key aspects of the experimental data; importantly, this removes the need to postulate an additional, separate generalization mechanism. Peña et al. (2002) conduct a series of Artificial Language Learning experiments in which French-speaking adults are familiarized with a synthesized speech stream consisting of a sequence of artificial words. Each of these words contains three syllables A_iXC_i, such that the A_i syllable always co-occurs with the C_i syllable (as indicated by the subindex i). This forms a consistent pattern (a "rule") consisting of a non-adjacent dependency between A_i and C_i, with a middle syllable X that varies. The order of the words in the stream is randomized, with the constraint that words do not appear consecutively if they either: (i) belong to the same "family" (i.e., they have the same A_i and C_i syllables), or (ii) they have the same middle syllable X. 
Table 1 (example items): stream: pulikiberagatafodupuraki...; words (A_iXC_i): puliki, beraga, tafodu, ...; part-words (C_jA_iX, XC_iA_j): kibera, ragata, gatafo, ...; rule-words (A_iYC_i): pubeki, beduga, takidu, ...; class-words (A_iYC_j): pubedu, betaki, tapuga, ...; rule*-words (A_iZC_i): puveki, bezoga, tathidu, ... After the familiarization phase, the participants respond to a two-alternative forced-choice test. The two alternatives involve a word vs. a part-word, or a word vs. a rule-word, and the participants are asked to judge which item seemed to them more like a word of the imaginary language they had listened to. A part-word is an ill-segmented sequence of the form XC_iA_j or C_iA_jX; a choice for a part-word over a word is assumed to indicate that the word was not correctly extracted from the stream. A rule-word is a rule-obeying sequence that involves a "novel" middle syllable Y (meaning that Y did not appear in the stream as an X, although it did appear as an A or C). Rule-words are therefore a particular generalization from words. Table 1 shows examples of these types of test items. In their baseline experiment, the authors expose the participants to a 10-minute stream of A_iXC_i words. In the subsequent test phase, the subjects show a significant preference for words over part-words, proving that the words could be segmented out of the familiarization stream. In a second experiment the same setup is used, with the exception that the test now involves a choice between a part-word and a rule-word. The subjects' responses in this experiment do not show a significant preference for either part-words or rule-words, suggesting that participants do not generalize to novel grammatical sequences. However, when the authors, in a third experiment, insert micropauses of 25ms between the words, the participants do show a preference for rule-words over part-words. A shorter familiarization (2 minutes) containing micropauses also results in a preference for rule-words; in contrast, a longer familiarization (30 minutes) without the micropauses results in a preference for part-words. In short, the presence of micropauses seems to facilitate generalization to rule-words, while the amount of exposure time correlates negatively with this capacity. Endress and Bonatti (2007) report a range of experiments with the same familiarization procedure used by Peña et al. However, their test for generalization is based on class-words: unseen sequences that start with a syllable of class "A" and end with a syllable of class "C", but with A and C not appearing in the same triplet in the familiarization (and therefore not forming a non-adjacent dependency). From the extensive list of experiments conducted by the authors, we will refer only to those that test the preference between words and class-words, for different amounts of exposure time. The results for those experiments (illustrated in figure 1) also show that the preference for generalized sequences decreases with exposure time. For short exposures (2 and 10 minutes) there is a significant preference for class-words; when the exposure time is increased to 30 minutes, there is no preference for either type of sequence, and in a 60-minute exposure, the preference reverses to part-words. (Figure 1 reports the results of Peña et al. (2002) and Endress & Bonatti (2007) for different exposure times to the familiarization stream.) Frost & Monaghan (2016) subsequently showed that micropauses are not essential for rule-like generalization to occur. Rather, the degree of generalization depends on the type of test sequences. 
The authors notice that the middle syllables used in rule-words might actually discourage generalization, since those syllables appear in a different position in the stream. Therefore, they test their participants with rule*-words: sequences of the form A_iZC_i, where A_i and C_i co-occur in the stream, and Z does not appear. After a 10-minute exposure without pauses, participants show a clear preference for the rule*-words over part-words of the form ZC_iA_j or C_iA_jZ. The pattern of results is complex, but we can identify the following key findings: (i) generalization for a stream without pauses is only manifested for rule*-words, but not for rule-words nor class-words; (ii) the preference for rule-words and class-words is boosted if micropauses are present; (iii) increasing the amount of exposure time correlates negatively with generalization to rule-words and class-words (with differences depending on the type of generalization and the presence of micropauses, as can be seen in figure 1). This last phenomenon, which we call the time effect, is precisely the aspect we want to explain with our model. (Note, in figure 1, that in the case of rule-words and pauses, the amount of generalization increases a tiny bit with exposure time, contrary to the time effect. We cannot test whether this is a significant difference, since we do not have access to the data. Endress & Bonatti, however, provided convincing statistical analysis supporting a significant inverse correlation between exposure time and generalization to class-words.) 3. Understanding the generalization mechanism: a 3-step approach. Peña et al. interpret their findings as support for the theory that there are at least two mechanisms, which get activated in the human brain based on different cues in the input. Endress and Bonatti adopt that conclusion (and name it the More-than-One-Mechanism hypothesis, or MoM), and moreover claim that this additional mechanism cannot be based on statistical computations. The authors predict that statistical learning would benefit from increasing the amount of exposure: "If participants compute the generalizations by a single associationist mechanism, then they should benefit from an increase in exposure, because longer experience should strengthen the representations built by associative learning (whatever these representations may be)." (Endress and Bonatti, 2007). We think this argument is based on a wrong premise: stronger representations do not necessarily entail greater generalization. On the contrary, we argue that even very basic models of statistical smoothing make the opposite prediction. To demonstrate this in a model that can be compared to empirical data, we propose to think about the process of generalization in ALL as involving the following steps (illustrated also in figure 2): (i) Memorization: build up a memory store of segments with frequency information (i.e., compute subjective frequencies). (ii) Quantification of the propensity to generalize: depending on the frequency information from (i), decide how likely other unseen types are. (iii) Distribution of probability over possible generalizations: distribute the probability for unseen types computed in (ii), assigning a probability to each generalized sequence. Crucially, we believe that step (ii) has been neglected in ALL models of generalization. 
This step accounts for the fact that generalization is not only based on the particular structure underlying the stimuli, but also depends on the statistical properties of the input.At this point, we can already reassess the MoM hypothesis: more exposure time does entail better representation of the stimuli (as would be reflected in step (i)), but the impact of exposure time on generalization depends on the model used for step (ii). Next, we show that a cognitive model of step (i) and a rational statistical model of step (ii) already account for the time effect. | 0 |
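Step (ii) can be illustrated with the Good-Turing idea that the probability mass reserved for unseen types is estimated from the proportion of hapaxes, N1/N; longer exposure to the same vocabulary drives this mass down, which is the direction of the time effect. The sketch below shows only that unseen-mass estimate (the full Simple Good-Turing method additionally smooths the frequency-of-frequency counts), and the toy "familiarization" token lists are invented for illustration.

```python
from collections import Counter

def unseen_mass(observed_tokens):
    """Good-Turing estimate of the total probability of unseen types: N1 / N."""
    counts = Counter(observed_tokens)
    n_total = sum(counts.values())
    n1 = sum(1 for c in counts.values() if c == 1)   # types seen exactly once
    return n1 / n_total

# short exposure: many hapaxes -> large reserved mass -> more propensity to generalize
short = ["puliki", "beraga", "tafodu", "puraki", "beliga", "tafodu"]
# long exposure: the same types repeat -> small reserved mass -> less propensity to generalize
long_ = short * 20

print(round(unseen_mass(short), 3))
print(round(unseen_mass(long_), 3))
```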
JANUS is a multi-lingual speech-to-speech translation system designed to facilitate communication between two parties engaged in a spontaneous conversation in a limited domain. In this paper we describe the current design and performance of the machine translation module of our system. The analysis of spontaneous speech requires dealing with problems such as speech disfluencies, looser notions of grammaticality and the lack of clearly marked sentence boundaries. These problems are further exacerbated by errors of the speech recognizer. We describe how our machine translation system is designed to effectively handle these and other problems. In an attempt to achieve both robustness and translation accuracy, we use two different translation components: the GLR* module, designed to be more accurate, and the Phoenix module, designed to be more robust. Both modules follow an interlingua-based approach. The translation modules are designed to be language-independent in the sense that they each consist of a general processor that applies independently specified knowledge about different languages. This facilitates the easy adaptation of the system to new languages and domains. We analyze the strengths and weaknesses of each of the translation approaches and describe our work on combining them. Our current system is designed to translate spontaneous dialogues in the Scheduling domain, with English, Spanish and German as both source and target languages. A recent focus has been on developing a detailed end-to-end evaluation procedure to measure the performance and effectiveness of the system. We describe this procedure in the latter part of the paper, and present our most recent Spanish-to-English performance evaluation results. | 0 |
Personality describes characteristics which are central to human behaviour, and has implications for social interactions: It can affect performance on collaborative processes, and can increase engagement when incorporated within virtual agents (Hernault et al., 2008) . In addition, personality has also been shown to influence linguistic style, both in written and spoken language (Pennebaker and King, 1999; Gill and Oberlander, 2002) . Whilst individuals often possess individual styles of self-expression, such as those influenced by personality, in a conversation they may align or match the linguistic style of their partner: For example, by entraining, or converging, on a mutual vocabulary. Such alignment is associated with increased familiarity, trust, and task success (Shepard et al., 2001) . People also adjust their linguistic styles when interacting with computers, and this affects their perceptions of the interaction (Porzel et al., 2006) . However, when humans -or machines -are faced with a choice of matching the language of their conversational partner, this often raises a conflict: matching the language of an interlocutor may mean subduing one's own linguistic style. Better understanding these processes relating to language choice and interpersonal perception can inform our knowledge of human behaviour, but also have important implications for the design of dialogue systems and user interfaces.In this paper, we present and evaluate novel automated natural language generation techniques, via the Critical Agent Dialogue system version 2 (CRAG 2), which enable us to generate dynamic, short-term alignment effects along with stable, longterm personality effects. We use it to investigate the following questions: Can personality be accurately judged from short, automatically generated dialogues? What are the effects of alignment between characters? How is the quality of the characters' relationship perceived? Additionally, in our evaluation study we examine perceptions of the different forms of alignment present in the dialogues, for example at the word, phrase or polarity levels. In the following we review relevant literature, before describing the CRAG 2 system and experimental method, and then presenting our results and discussion. | 0 |
Both language modeling (Wu and Khudanpur, 2003; Mikolov et al., 2010; Bengio et al., 2003) and text generation (Axelrod et al., 2011) boil down to modeling the conditional probability of a word given the preceding words. Previously, this was mostly done through purely memory-based approaches, such as n-grams, which cannot deal with long sequences and have to use heuristics (called smoothing) for rare ones. Another family of methods is based on distributed representations of words, usually tied to a neural-network (NN) architecture for estimating the conditional probabilities of words. Two categories of neural networks have been used for language modeling: 1) recurrent neural networks (RNN), and 2) feedforward networks (FFN). The RNN-based models, including variants like the LSTM, enjoy more popularity, mainly due to their flexible structures for processing word sequences of arbitrary lengths, and their recent empirical success (Graves, 2013). We however argue that RNNs, with their power built on the recursive use of a relatively simple computation unit, are forced to make a greedy summarization of the history and are consequently not efficient at modeling word sequences, which clearly have a bottom-up structure. The FFN-based models, on the other hand, avoid this difficulty by feeding directly on the history. However, the FFNs are built on fully-connected networks, rendering them inefficient at capturing local structures of language. Moreover, their "rigid" architectures make it hard to handle the great variety of patterns in long-range correlations of words. We propose a novel convolutional architecture, named genCNN, as a model that can efficiently combine local and long-range structures of language for the purpose of modeling conditional probabilities. genCNN can be directly used in generating a word sequence (i.e., text generation) or evaluating the likelihood of word sequences (i.e., language modeling). (Figure 1: The overall diagram of a genCNN. Here "/" stands for a zero padding. In this example, each CNN component covers 6 words, while in practice the coverage is 30-40 words.) We also show the empirical superiority of genCNN on both tasks over traditional n-grams and its RNN or FFN counterparts. Notation: we will use $V$ to denote the vocabulary, and $e_t$ ($\in \{1, \dots, |V|\}$) to denote the $t$-th word in a sequence $e_{1:t}$, with a further index added when the sequence is itself indexed by $n$. | 0 |
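The core computation behind such a model is a convolution over the embedded history followed by a softmax over the vocabulary. Below is a minimal numpy sketch of one scoring step with random weights; the real genCNN stacks gated α/β CNN components over much longer histories, which is omitted here, and the vocabulary and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["</s>", "i", "was", "starving", "so", "buy", "sandwich", "salad"]
E, H, K = 8, 16, 3                      # embedding dim, hidden dim, convolution width

emb = rng.normal(scale=0.1, size=(len(VOCAB), E))
conv_w = rng.normal(scale=0.1, size=(K * E, H))    # convolution filter over K-word windows
out_w = rng.normal(scale=0.1, size=(H, len(VOCAB)))

def next_word_probs(history):
    """P(w_t | history): convolve K-grams of the history, max-pool, project, softmax."""
    ids = [VOCAB.index(w) for w in history]
    x = emb[ids]                                           # (T, E)
    windows = [x[i:i + K].reshape(-1) for i in range(len(ids) - K + 1)]
    h = np.tanh(np.stack(windows) @ conv_w)                # (T-K+1, H)
    pooled = h.max(axis=0)                                 # max-pooling over positions
    logits = pooled @ out_w
    e = np.exp(logits - logits.max())
    return dict(zip(VOCAB, e / e.sum()))

probs = next_word_probs(["i", "was", "starving", "so", "buy"])
print(max(probs, key=probs.get), round(max(probs.values()), 3))
```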
As noted in for instance Richards and Underwood (1985) , Jönsson and Dahlbäck (1988) and Wooffitt et al. (1997) , people tend to behave differently when interacting with a machine as opposed to a human being. These differences are found in various aspects of the dialog, such as the type of request formulation, the frequency of response token, the use of greetings and the organization of the opening and closing sequences, to mention a few. As a consequence of these findings, so-called Wizard of Oz experiments (abbrev. WOZ) are widely used for collecting data about how people interact with computer systems. In a Wizard of Oz experiment the subjects are led to believe that they are interacting with a computer when they are in fact interacting with a human being (a wizard). The wizard can act as a speech synthesizer, speech recognizer, and/or perform various tasks which will eventually be performed by the future system. It is vital that the subjects really think they are communicating with an implemented system in order to obtain reliable data concerning human-computer interaction. The findings in WOZ experiments can serve as an important guide in further development and design of the system (Dahlbäck et al., 1993) .In this paper we investigate the methodology used in WOZ experiments to see how various factors can influence the results. We will start by introducing the experiment done by Richards and Underwood (1985) . Then a similar experiment performed in Trondheim will be presented. The results from this experiment will serve as our stance for questioning the claim made in Richards and Underwood (1985) . We will also use data material from human-human dialogs to support our claim. | 0 |
Many complex speech and natural language processing (NLP) pipelines such as Automatic Speech Recognition (ASR) and Statistical Machine Translation (SMT) systems store alternative hypotheses produced at various stages of processing as weighted acyclic automata, also known as lattices. Each lattice stores a large number of hypotheses along with the raw system scores assigned to them. While the single-best hypothesis is typically what is desired at the end of the processing, it is often beneficial to consider a large number of weighted hypotheses at earlier stages of the pipeline to hedge against errors introduced by various subcomponents. Standard ASR and SMT techniques like discriminative training, rescoring with complex models and Minimum Bayes-Risk (MBR) decoding rely on lattices to represent intermediate system hypotheses that will be further processed to improve models or system output. For instance, lattice-based MBR decoding has been shown to give moderate yet consistent gains in performance over conventional MAP decoding in a number of speech and NLP applications including ASR (Goel and Byrne, 2000) and SMT (Tromble et al., 2008; Blackwood et al., 2010; de Gispert et al., 2013). Most lattice-based techniques employed by speech and NLP systems make use of posterior quantities computed from probabilistic lattices. In this paper, we are interested in two such posterior quantities: i) the n-gram expected count, the expected number of occurrences of a particular n-gram in a lattice, and ii) the n-gram posterior probability, the total probability of the accepting paths that include a particular n-gram. Expected counts have applications in the estimation of language model statistics from probabilistic input such as ASR lattices (Allauzen et al., 2003) and the estimation of term frequencies from spoken corpora, while posterior probabilities come up in MBR decoding of SMT lattices (Tromble et al., 2008), relevance ranking of spoken utterances and the estimation of document frequencies from spoken corpora (Karakos et al., 2011; Can and Narayanan, 2013). The expected count c(x|A) of n-gram x given lattice A is defined as

c(x|A) = Σ_y #_y(x) p(y|A)    (1)

where #_y(x) is the number of occurrences of n-gram x in hypothesis y and p(y|A) is the posterior probability of hypothesis y given lattice A. Similarly, the posterior probability p(x|A) of n-gram x given lattice A is defined as

p(x|A) = Σ_y 1_y(x) p(y|A)    (2)

where 1_y(x) is an indicator function taking the value 1 when hypothesis y includes n-gram x and 0 otherwise. While it is straightforward to compute these posterior quantities from weighted n-best lists by examining each hypothesis separately and keeping a separate accumulator for each observed n-gram type, it is infeasible to do the same with lattices due to the sheer number of hypotheses stored. There are efficient algorithms in the literature (Allauzen et al., 2003; Allauzen et al., 2004) for computing n-gram expected counts from weighted automata that rely on weighted finite-state transducer operations to reduce the computation to a sum over n-gram occurrences, eliminating the need for an explicit sum over accepting paths. The rather innocent-looking difference between Equations 1 and 2, #_y(x) vs. 1_y(x), makes it hard to develop similar algorithms for computing n-gram posteriors from weighted automata since the summation of probabilities has to be carried out over paths rather than n-gram occurrences (Blackwood et al., 2010; de Gispert et al., 2013).
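For intuition, the two quantities are easy to compute from an explicit weighted n-best list, the simple case mentioned above; the point of the paper is doing this efficiently on lattices, which this sketch does not attempt. The toy data below is illustrative.

```python
# Sketch: expected count c(x|A) and posterior probability p(x|A) of an n-gram x
# computed from an explicit weighted n-best list (not from a lattice).
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def expected_count_and_posterior(nbest, x):
    """nbest: list of (hypothesis_tokens, posterior p(y|A)) pairs; x: n-gram tuple."""
    c, p = 0.0, 0.0
    for tokens, post in nbest:
        occ = Counter(ngrams(tokens, len(x)))[x]   # #_y(x)
        c += occ * post                            # Eq. (1)
        p += (1.0 if occ > 0 else 0.0) * post      # Eq. (2)
    return c, p

nbest = [("a b a b".split(), 0.6), ("a b c".split(), 0.4)]
print(expected_count_and_posterior(nbest, ("a", "b")))   # (1.6, 1.0)
```

The example also shows why the two quantities differ only when an n-gram can repeat within a hypothesis, which is exactly the observation the paper's filtering step exploits.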
The problem of computing n-gram posteriors from lattices has been addressed by a number of recent works (Tromble et al., 2008; Allauzen et al., 2010; Blackwood et al., 2010; de Gispert et al., 2013) in the context of lattice-based MBR for SMT. In these works, it has been reported that the time required for lattice MBR decoding is dominated by the time required for computing n-gram posteriors. Our interest in computing n-gram posteriors from lattices stems from its potential applications in spoken content retrieval (Chelba et al., 2008; Karakos et al., 2011; Can and Narayanan, 2013). Computation of document frequency statistics from spoken corpora relies on estimating n-gram posteriors from ASR lattices. In this context, a spoken document is simply a collection of ASR lattices. The n-grams of interest can be word, syllable, morph or phoneme sequences. Unlike in the case of lattice-based MBR for SMT, where the n-grams of interest are relatively short (typically up to 4-grams), the n-grams we are interested in are in many instances relatively long sequences of subword units. In this paper, we present an efficient algorithm for computing the posterior probabilities of all n-grams in a lattice and constructing a minimal deterministic weighted finite-state automaton associating each n-gram with its posterior for efficient storage and retrieval. Our n-gram posterior computation algorithm builds upon the custom forward procedure described in (de Gispert et al., 2013) and introduces a number of refinements to significantly improve the time and space requirements:
• The custom forward procedure described in (de Gispert et al., 2013) computes unigram posteriors from an input lattice. Higher-order n-gram posteriors are computed by first transducing the input lattice to an n-gram lattice using an order mapping transducer and then running the custom forward procedure on this higher-order lattice. We reformulate the custom forward procedure as a dynamic programming algorithm that computes posteriors for successively longer n-grams and reuses the forward scores computed for the previous order. This reformulation subsumes the transduction of input lattices to n-gram lattices and obviates the need for constructing and applying order mapping transducers.
• Comparing Eq. 1 with Eq. 2, we can observe that the posterior probability and the expected count are equivalent for an n-gram that does not repeat on any path of the input lattice. The key idea behind our algorithm is to limit the costly posterior computation to only those n-grams that can potentially repeat on some path of the input lattice. We keep track of repeating n-grams of order n and use a simple impossibility argument to significantly reduce the number of n-grams of order n + 1 for which posterior computation will be performed. The posteriors for the remaining n-grams are replaced with expected counts. This filtering of n-grams introduces a slight bookkeeping overhead but in return dramatically reduces the runtime and memory requirements for long n-grams.
• We store the posteriors for n-grams that can potentially repeat on some path of the input lattice in a weighted prefix tree that we construct on the fly.
Once that is done, we compute the expected counts for all n-grams in the input lattice and represent them as a minimal deterministic weighted finite-state automaton, known as a factor automaton (Allauzen et al., 2004), using the approach described in (Allauzen et al., 2004). Finally we use general weighted automata algorithms to merge the weighted factor automaton representing expected counts with the weighted prefix tree representing posteriors to obtain a weighted factor automaton representing posteriors that can be used for efficient storage and retrieval. [Table: common semirings, given as (set, ⊕, ⊗, 0̄, 1̄) — Boolean: {0, 1}, ∨, ∧, 0, 1; Probability: R+ ∪ {+∞}, +, ×, 0, 1; Log: R ∪ {−∞, +∞}, ⊕_log, +, +∞, 0; Tropical: R ∪ {−∞, +∞}, min, +, +∞, 0; where a ⊕_log b = −log(e^−a + e^−b).] | 0 |
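The semiring operations in the table above are simple enough to spell out in code; the following sketch is only a toy illustration of ⊕ and ⊗ for the probability, log and tropical semirings, not the weighted-automata library the paper builds on.

```python
# Toy illustration of the semiring operations from the table above.
import math

def log_plus(a, b):                      # a ⊕_log b = −log(e^−a + e^−b)
    if math.isinf(a): return b
    if math.isinf(b): return a
    return min(a, b) - math.log1p(math.exp(-abs(a - b)))

SEMIRINGS = {
    # name: (plus ⊕, times ⊗, zero 0̄, one 1̄)
    "probability": (lambda a, b: a + b, lambda a, b: a * b, 0.0, 1.0),
    "log":         (log_plus,           lambda a, b: a + b, math.inf, 0.0),
    "tropical":    (min,                lambda a, b: a + b, math.inf, 0.0),
}

plus, times, zero, one = SEMIRINGS["log"]
print(plus(1.0, 2.0))   # ≈ 0.6867, i.e. −log(e^−1 + e^−2)
```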
Fine-grained propaganda detection is a new approach to tackling online misinformation, highlighting instances of propaganda techniques at the word level. These techniques are used in textual communication in order to encourage certain beliefs, but instead of straightforward presentation of arguments, they rely on psychological manipulation, logical fallacies or emotion elicitation. There are general-purpose natural language processing (NLP) methods that could be used for automatic detection of such text fragments. The challenge here is that they require large amounts of training data, which are laborious to produce. However, propaganda techniques are often related to other misinformation challenges, for which large datasets do exist, e.g. credibility assessment or fake news detection. In the present study we aim to investigate how this connection can be used in the multi-task learning (MTL) framework. We show how the performance of multi-label token-level propaganda detection within shared task 6 at SemEval-2021 can be improved by building neural architectures that are also trained to solve other tasks: single-label propaganda detection from SemEval-2020 and document-level credibility assessment based on a fake news corpus. We check different MTL scenarios (parallel and sequential) and show which aspects of the model benefit the most from this approach. | 0 |
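As a rough sketch of a parallel multi-task setup of the kind described above, a shared encoder with a token-level head for propaganda-technique tagging and a document-level head for credibility classification, consider the following; the encoder choice, label counts and pooling are assumptions, not the exact system.

```python
# Sketch (assumed encoder/sizes): parallel multi-task model with a shared encoder,
# a token-level head (propaganda techniques) and a document-level head (credibility).
import torch.nn as nn
from transformers import AutoModel

class MultiTaskTagger(nn.Module):
    def __init__(self, model_name="bert-base-cased", n_techniques=20, n_doc_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.token_head = nn.Linear(hidden, n_techniques)   # multi-label, per token
        self.doc_head = nn.Linear(hidden, n_doc_labels)     # single-label, per document

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        token_logits = self.token_head(out.last_hidden_state)    # (B, T, n_techniques)
        doc_logits = self.doc_head(out.last_hidden_state[:, 0])  # [CLS] pooling
        return token_logits, doc_logits

# Training would sum a BCE loss on token_logits and a cross-entropy loss on doc_logits.
```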
As an important fine-grained subtask in the field of sentiment analysis, Aspect-Based Sentiment Classification (ABSC) aims to detect the sentiment polarities of aspect terms mentioned in a review sentence. For example, in Table 1, given the aspect term food, it is expected to identify its corresponding sentiment polarity as positive. [Table 1: review "The food here is rather good, but only if you like to wait for it." — aspect term: food, category: Food, sentiment: positive; aspect term: NULL, category: Service, sentiment: negative.] The main limitation of ABSC lies in the fact that aspect terms need to be annotated before aspect sentiment classification, which is not practical in real applications. To address this problem, many studies have been proposed to explore Aspect Term-based Sentiment Analysis (ATSA), which performs aspect term extraction and aspect sentiment classification jointly (Mitchell et al., 2013; Zhang et al., 2015; Luo et al., 2019; Hu et al., 2019). However, ATSA still suffers from a major obstacle: it only considers explicit aspects and completely ignores implicit aspects in text. Take the review in Table 1 as an example. Although the second clause does not mention any aspect term, it clearly expresses the user's negative sentiment towards the service. More importantly, we observe that existing benchmark datasets contain a large number of such reviews. For instance, Table 2 shows the proportion of reviews with implicit aspects in the Restaurant dataset from SemEval 2015 and 2016, and it is clear that nearly 25% of the examples contain implicit aspects. Since these examples also convey valuable information, we should no longer ignore them, as the previous ATSA methods do. Motivated by this, we focus on Aspect-Category based Sentiment Analysis (ACSA) in this paper, aiming to perform joint aspect category detection and category-oriented sentiment classification. Compared with ATSA, ACSA has the following two advantages. On the one hand, for each aspect mentioned in a review sentence, even if it does not have a corresponding aspect term, there must be a corresponding aspect category, so that we can identify the user's sentiment over it. On the other hand, from the perspective of application in real scenarios, although ACSA does not extract the aspect terms explicitly, it already meets the demand for opinion summaries at the aspect-category level of granularity. However, research in this area is relatively rare, and only a few preliminary studies have been carried out. Schmitt et al. (2018) proposed a joint model by extending sentiment labels with one more dimension to indicate the occurrence of each aspect category, which is shown to outperform traditional pipeline methods. Another feasible solution is to perform a Cartesian product of aspect categories and sentiment labels, which essentially performs multi-label sentiment classification for each aspect category. Nevertheless, most of these methods fail to explicitly model the hierarchical relationship between aspect category detection and category-oriented sentiment classification. In particular, when there are many aspect categories, it is difficult for these methods to learn the inner-relations among multiple categories and the inter-relations between categories and sentiments. In this paper, we re-formalize the task as a category-sentiment hierarchy prediction problem, which contains a two-layer hierarchy output structure.
The lower layer is to detect aspect categories, which can be modelled as a multi-label classification problem (i.e., one review may contain more than one category). The higher layer is to perform category-oriented sentiment classification, which can be modelled as a multi-class classification problem for each detected category.Under the hierarchy output structure, our model contains three modules: the bottom module leverages BERT to obtain hidden representations of the two sub-tasks respectively. In the middle module, we propose a Hierarchical Graph Convolutional Network (Hier-GCN), where the lower-level GCN is to model the inner-relations among multiple categories, and the higher-level GCN is to capture the interrelations between categories and category-oriented sentiments. Based on the interactive representations generated from Hier-GCN, the top module performs category-sentiment hierarchy prediction to generate the final output.We conduct experiments on four benchmark product review datasets from SemEval 2015 and 2016. The results prove that the hierarchy output structure achieves better performance than other existing structures. On this basis, the proposed Hier-GCN architecture can bring additional performance gains, and consistently achieves the best results across the four datasets. Further analysis also proves the effectiveness of Hier-GCN in cases of both explicit aspect and implicit aspect. | 0 |
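A minimal sketch of the two-layer output structure just described, a multi-label category head and a per-category sentiment head on top of a shared encoder, is given below. It is not the Hier-GCN model itself; the encoder choice and sizes are assumptions.

```python
# Sketch of a category-sentiment hierarchy output (not the full Hier-GCN model):
# lower layer = multi-label category detection, higher layer = per-category sentiment.
import torch.nn as nn
from transformers import AutoModel

class CategorySentimentHierarchy(nn.Module):
    def __init__(self, model_name="bert-base-uncased", n_categories=12, n_polarities=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        h = self.encoder.config.hidden_size
        self.category_head = nn.Linear(h, n_categories)                  # multi-label
        self.sentiment_head = nn.Linear(h, n_categories * n_polarities)  # per category
        self.n_categories, self.n_polarities = n_categories, n_polarities

    def forward(self, input_ids, attention_mask):
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        cat_logits = self.category_head(cls)                             # (B, C)
        sent_logits = self.sentiment_head(cls).view(-1, self.n_categories,
                                                    self.n_polarities)   # (B, C, P)
        return cat_logits, sent_logits

# At inference: predict a category if sigmoid(cat_logits) > 0.5, and only then read
# off argmax(sent_logits) for that category.
```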
Sensorial information interpenetrates languages with various semantic roles in different levels since the main interaction instrument of humans with the outside world is the sensory organs. The transformation of the raw sensations that we receive through the sensory organs into our understanding of the world has been an important philosophical topic for centuries. According to a classification that dates back to Aristotle (Johansen, 1997) , senses can be categorized into five modalities, namely, sight, hearing, taste, smell and touch. With the help of perception, we can process the data coming from our sensory receptors and become aware of our environment. While interpreting sensory data, we unconsciously use our existing knowledge and experience about the world to create a private experience (Bernstein, 2010) .Language has a significant role as our main communication device to convert our private experiences to shared representations of the environment that we perceive (Majid and Levinson, 2011) . As a basic example, onomatopoeic words, such as knock or woof, are acquired by direct imitation of the sounds allowing us to share the experience of what we hear. As another example, where an imitation is not possible, is that giving a name to a color, such as blue, provides a tool to describe a visual feature of an object. In addition to the words that describe the direct sensorial features of objects, languages include many other lexical items that are connected to sensory modalities in various semantic roles. For instance, while some words can be used to describe a perception activity (e.g., to sniff, to watch, to feel), others can simply be physical phenomena that can be perceived by sensory receptors (e.g., light, song, salt, smoke) .Common usage of language, either written or spoken, can be very dense in terms of sensorial words. As an example, the sentence "I felt the cold breeze." contains three sensorial words: to feel as a perception activity, cold as a perceived sensorial feature and breeze as a physical phenomenon. The connection to the sense modalities of the words might not be mutually exclusive, that is to say a word can be associated with more than one senses. For instance, the adjective sweet could be associated with both the senses of taste and smell. While we, as humans, have the ability to connect words with senses intuitively by using our commonsense knowledge, it is not straightforward for machines to interpret sensorial information.Making use of a lexicon containing sensorial words could be beneficial for many computational scenarios. Rodriguez-Esteban and Rzhetsky (2008) report that using words related to senses in a text could clarify the meaning of an abstract concept by facilitating a more concrete imagination. To this respect, an existing text could be automatically modified with sensory words for various purposes such as attracting attention or biasing the audience towards a specific concept. Additionally, sensory words can be utilized to affect private psychology by inducing a positive or negative sentiment (Majid and Levinson, 2011) . For instance, de Araujo et al. (2005) show that the pleasantness level of the same odor can be altered by labeling it as body odor or cheddar cheese. As another motivation, the readability and understandability of text could also be enhanced by using sensory words (Rodriguez-Esteban and Rzhetsky, 2008) . 
A compelling use case of a sensorial lexicon is that automatic text modification to change the density of a specific sense could help people with sensory disabilities. For instance, while teaching a concept to a congenitally blind child, an application that eliminates color-related descriptions would be beneficial. A sensorial lexicon could also be exploited by search engines to personalize the results according to user needs. Advertising is another broad area which would benefit from such a resource, especially by using synaesthesia, as it strengthens creative thinking and is commonly exploited as an imagination-boosting tool in advertisement slogans (Pricken, 2008). [Footnote 1: The American Heritage Dictionary (http://ahdictionary.com/) defines synaesthesia in linguistics as the description of one kind of sense impression by using words that normally describe another.] As an example, we can consider the slogans "The taste of a paradise", where the sense of sight is combined with the sense of taste, or "Hear the big picture", where sight and hearing are merged. Various studies have been conducted both in computational linguistics and cognitive science that build resources associating words with several cognitive features such as abstractness-concreteness (Coltheart, 1981; Turney et al., 2011), emotions (Mohammad and Turney, 2010), colors (Özbal et al., 2011; Mohammad, 2011) and imageability (Coltheart, 1981). However, to the best of our knowledge, there is no attempt in the literature to build a resource that associates words with senses. In this paper, we propose a computational method to automatically generate a sensorial lexicon that associates words in English with senses. Our method consists of two main steps. First, we generate a set of seed words for each sense category with the help of a bootstrapping approach. In the second step, we exploit a corpus-based probabilistic technique to create the final lexicon. We evaluate this lexicon with the help of a gold standard that we obtain by using the crowdsourcing service of CrowdFlower. The sensorial lexicon, which we named Sensicon, embodies 22,684 English lemmas together with their part-of-speech (POS) information that have been linked to one or more of the five senses. Each entry in this lexicon consists of a lemma-POS pair and a score for each sensory modality that indicates the degree of association. For instance, the verb stink has the highest score for smell as expected while the scores for the other four senses are very low. The noun tree, which is a concrete object and might be perceived by multiple senses, has high scores for sight, touch and smell. The rest of the paper is organized as follows. We first review previous work relevant to this task in Section 2. Then in Section 3, we describe the proposed approach in detail. In Section 4, we explain the annotation process that we conducted and the evaluation strategy that we employed. Finally, in Section 5, we draw our conclusions and outline possible future directions. | 0 |
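The second step, a corpus-based probabilistic association between words and the five senses, can be approximated very simply, e.g. with a PMI-style score between a word and each sense's seed set computed from co-occurrence counts. The sketch below is only an illustration of that general idea, not the procedure actually used to build Sensicon, and the seed words are made up.

```python
# Illustrative sketch: PMI-style association between a word and each sense's seed set,
# computed from sentence-level co-occurrence counts (not the actual Sensicon method).
import math
from collections import Counter
from itertools import combinations

SEEDS = {"sight": {"see", "blue"}, "hearing": {"hear", "loud"},
         "taste": {"sweet", "salty"}, "smell": {"sniff", "odor"},
         "touch": {"soft", "cold"}}

def sense_scores(corpus_sentences):
    word_count, pair_count, total = Counter(), Counter(), 0
    for sent in corpus_sentences:
        tokens = set(sent.lower().split())
        total += 1
        word_count.update(tokens)
        pair_count.update(combinations(sorted(tokens), 2))
    def pmi(w, s):
        joint = pair_count[tuple(sorted((w, s)))] / total
        if joint == 0:
            return 0.0
        return math.log(joint / ((word_count[w] / total) * (word_count[s] / total)))
    return {w: {sense: sum(pmi(w, s) for s in seeds) for sense, seeds in SEEDS.items()}
            for w in word_count}

scores = sense_scores(["the cold breeze felt soft", "a sweet and salty snack"])
```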
Predicate-argument structure (PAS) analysis is the task to identify the argument for each case of the target predicate. As it is a fundamental analysis for various natural language processing (NLP) applications, the PAS analysis has been one of the most active research areas in NLP. In discourse-oriented languages like Japanese, the target language of the present study, arguments are often omitted from the sentence Kayama (2003) . Those omitted arguments are considered as zero-pronouns or exophora.(1) meiru-o kaite okuttayo. yondene. mail ACC wrote v1 sent v2 read v3 /imperative I wrote a mail to you and sent it to you. Read it.For instance, example (1) has three predicates (v 1 , v 2 and v 3 ) and one explicit argument (mail). The PAS analysis result of example (1) looks like Table 1 , where the elements enclosed with square brackets are exophoric, the round bracketed is an intra-sentential zero-anaphora and the double round bracketed is an inter-sentential zero-anaphora. The accusative argument of v 1 , "meiru-o (mail)", is explicitly marked by the case marker "o" and has a dependency relation with v 1 , which is indicated by a bare noun, i.e. without any bracket.Although the Japanese PAS analysis is similar to the semantic role labeling (SRL) (Zhou and Xu, 2015; He et al., 2017) , it also involves anaphora resolution for zero-pronouns and exophora to identify the argument for every case of the predicate, which corresponds to filling the bracketed elements in Table 1. We also find omitted arguments in other pro-drop languages such as Chinese, Turkish, and some null-subject languages in the Romance languages (Iida and Poesio, 2011; Rello et al., 2012; Chen and Ng, 2016; Yin et al., 2017) .The past Japanese PAS analysis utilizes various features obtained from the morphological and syntactic analysis (Matsubayashi and Inui, 2017; Hayashibe et al., 2011; Imamura et al., 2014; Shibata et al., 2016; Ouchi et al., 2015; Yoshikawa et al., 2013; Taira et al., 2008) . The recent approach includes the end-to-end approach that does not require any intermediate analysis (Ouchi et al., 2017) .The contribution of the present paper to the Japanese PAS analysis is twofold. Firstly we subcategorize the exophora into fine-grained classes, namely, the exophoric text writer (exo1), reader (exo2) and the other entity (exoX). Example (2) depicts the necessity of the subcategorization.(2) sandoitti taberu.sandwich eat I eat sandwich. / Do you eat sandwich?Both the exophoric speaker (exo1) and hearer (exo2) can be the nominative argument of the verb "eat" and accordingly the sentence meaning is different.To distinguish these two meanings, the subcategorization of the exophora is necessary. Secondly, we introduce domain-adaptation techniques into the Japanese PAS analysis. Surdeanu et al. (2008) and Hajič et al. (2009) reported that the SRL performance degraded when the domains were different between the training and testing data. Yang et al. (2015) tackled this problem by introducing the domain adaptation into a deep learning method. As most of the past studies of the Japanese PAS analysis targeted a mono-type of texts, i.e. newspaper articles, the domain adaptation did not matter, except for Imamura et al. (2014) . They trained the PAS analyzer for dialogues by using newspaper articles. However, pairs of other media types have not been investigated yet. In contrast, we target various types of Japanese texts; we use Balanced Corpus of Contemporary Written Japanese (BCCWJ) 1 (Maekawa et al., 2014) for evaluation. 
BCCWJ contains 100 million words that were systematically collected from several source media such as newspaper articles (PN), books (PB), magazines (PM), white papers (OW), QA texts from the Internet (OC) and blog texts (OY). [Footnote 1: http://pj.ninjal.ac.jp/corpus_center/bccwj/en/] We use the core data set of BCCWJ, consisting of about two million words annotated with co-reference and predicate-argument relations for the nominative, dative and accusative cases. As we describe in the next section, the distribution of exophoric arguments differs notably across the source media; thus consideration of the differences in source media is necessary. We start with a recurrent neural network (RNN)-based base model and extend it by introducing the following five kinds of domain adaptation (a sketch of one of them is given after this list). (1) The fine-tuning method trains the model with the entire training data and uses the learnt parameters as the initial parameter values for a second stage of learning with the target-domain training data. (2) The feature augmentation method trains a shared network and domain-specific networks simultaneously (Kim et al., 2016). (3) The class probability shift method skews the output probability of the network based on the prior probability distribution of the argument types across the domains. (4) The voting method determines the output by a majority vote over the above three methods. (5) The mixture method combines the fine-tuning method, the feature augmentation method and the class probability shift method into a single method. We describe the details of each method in section 4. | 0 |
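As a concrete illustration of method (3), a class probability shift can be implemented by rescaling each class's predicted probability by the ratio of target-domain to training priors and renormalizing; the exact formulation used in the paper may differ, so treat this as an assumed variant.

```python
# Sketch of a class probability shift (assumed variant): rescale predicted class
# probabilities by the ratio of target-domain priors to training priors, renormalize.
import numpy as np

def shift_class_probabilities(probs, train_priors, target_priors):
    """probs: (n_examples, n_classes) model outputs; priors: (n_classes,) distributions."""
    ratio = np.asarray(target_priors) / np.asarray(train_priors)
    shifted = probs * ratio                       # skew towards the target-domain prior
    return shifted / shifted.sum(axis=1, keepdims=True)

probs = np.array([[0.7, 0.2, 0.1]])               # e.g. P(exo1), P(exo2), P(exoX)
print(shift_class_probabilities(probs, [0.5, 0.3, 0.2], [0.2, 0.5, 0.3]))
```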
Named Entity Recognition (NER) is a key component of Natural Language Processing (NLP), assigned to identify regions of text that contain references to entities. It is the process of identifying the informative parts of data, or applicable labels, from unstructured data. In NER, data is gathered from unstructured sources such as emails, blogs, newspapers, tweets, etc., to extract meaningful information. To put it another way, NER refers to identifying the token spans of entities mentioned in a text and classifying them into a set of predetermined categories. The system finds entities in unstructured data and organizes them into multiple categories. NER can be considered part of Information Extraction (IE), an extension of NLP. For NLP, IE is among the trending fields and plays an essential role in the following tasks: finding and understanding limited relevant parts of texts, gathering information from many pieces of text, and producing a unified representation of all the relevant information. The NER problem falls into a general class of NLP problems called sequence tagging. Part of Speech (POS) tagging and chunking are sequence tagging NLP tasks in addition to NER. NER can be divided into three settings: flat NER, nested NER, and discontinuous NER. Nested NER involves overlap between entity spans (Yan et al., 2021). Most approaches only target flat entities, ignoring nested structures that are common in many scenarios. It is challenging to identify the spans as well as the types of named entities in text (Lample et al., 2016). So, to find the best architecture, we implement a transformer-based language model in this research. NLP has significantly benefited from transfer learning in recent years. Transfer learning gains power and effectiveness from pre-training on large, unlabeled text datasets. The model can then be fine-tuned on a smaller labeled dataset, resulting in better performance. Many models have achieved success in this field, including the Text-To-Text Transfer Transformer (T5) (Raffel et al., 2019). T5 is a pre-trained encoder-decoder language model that employs the "text-to-text" format to accomplish all types of NLP work, including generation, translation, and summarization tasks. In SemEval-2022 Task 11 (Malmasi et al., 2022b), a multilingual complex NER task is provided. The languages presented in Malmasi et al. (2022a) are Bangla, German, English, Spanish, Farsi, Hindi, Korean, Dutch, Russian, Turkish, and Chinese. Since this dataset contains words from different languages, it is challenging to choose an appropriate word representation for converting them into their corresponding vectors. The multilingual nature of this task necessitated the selection of the multilingual variant of Google's T5 model, named mT5 (Xue et al., 2020), which had already been trained on a dataset covering 101 languages and contains up to 13 billion parameters. In this paper, mT5 is used as the main embedding model (a minimal sketch of this setup is given after the outline below). We evaluated the proposed model on the English test set and achieved an F1-score of 71.45% as part of this competition. In our next step, we applied this model to the other subtasks, and our rank varied from 9 to 21 depending on the subtask evaluated. Our code is available on GitHub for researchers. [* Equal contribution. Listing order is random.] The contributions of this paper are summarized as follows. Section 2 introduces previous attempts at NER. In Section 3, information about the task and datasets is presented.
We then offer a deep learning framework for recognizing named entities in Section 4. Section 5 details the experimental setup, while Section 6 presents the results of the experiments. Section 7 presents both quantitative and qualitative error analysis. We conclude our paper in Section 8. | 0 |
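A minimal sketch of using an mT5 encoder as the embedding backbone for token-level NER tagging follows; the classification head, the label count and the model size are assumptions, not the exact system described in this paper.

```python
# Sketch (assumed head/label set): mT5 encoder as the embedding backbone for NER,
# with a linear token-classification layer on top of the encoder hidden states.
import torch.nn as nn
from transformers import AutoTokenizer, MT5EncoderModel

class MT5Tagger(nn.Module):
    def __init__(self, model_name="google/mt5-small", num_labels=13):
        super().__init__()
        self.encoder = MT5EncoderModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.d_model, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)            # (batch, seq_len, num_labels)

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
batch = tokenizer(["Ballard Power Systems announced a deal"], return_tensors="pt")
logits = MT5Tagger()(batch["input_ids"], batch["attention_mask"])
```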
Bilingual Dictionary Induction (BDI) is the task of inducing word translations from monolingual corpora in different languages. It has been studied extensively, as it is one of the main tasks used for evaluating the quality of bilingual word embedding (BWE) models (Mikolov et al., 2013b; Vulic and Korhonen, 2016). It is also important for downstream tasks such as translating out-of-vocabulary words in MT (Huck et al., 2019). Although there is a large amount of work on BDI, there is no standard way to measure the performance of the systems, the published results are not comparable and the pros and cons of the various approaches are not clear. The aim of the BUCC 2020 shared task on Bilingual Dictionary Induction from Comparable Corpora (Rapp et al., 2020) is to solve this problem and compare various systems on a standard test set. It involves multiple languages, including Chinese, English, French, German, Russian and Spanish, and provides comparable monolingual corpora as well as training and testing dictionaries for high-, middle- and low-frequency words. In this paper, we present our approach to the shared task and show results on English-German and English-Russian. BWEs are popular for solving BDI by calculating the cosine similarity of word pairs and taking the n most similar candidates as translations for a given source word. They were shown to be very effective for the task using only a small seed lexicon (e.g., (Mikolov et al., 2013b)), as opposed to MT-based approaches where parallel data is necessary. In addition, Conneau et al. (2018) and Artetxe et al. (2018) were able to learn BWEs without any seed dictionaries using a self-learning method that starts from an initial weak solution and improves the mapping iteratively. Due to this, BDI is one of the building blocks of unsupervised MT and is particularly relevant in low-resource settings (Artetxe et al., 2018; Lample et al., 2018). [* The authors contributed equally to this manuscript.] Although BWE-based methods work well for translating high-frequency words, it was shown that they tend to have low performance when translating low-frequency words or named entities due to the poor vector representation of such words (Braune et al., 2018; Riley and Gildea, 2018; Czarnowska et al., 2019). By using character n-gram representations and the Levenshtein similarity of words, Braune et al. (2018) showed improved results on rare and domain-specific words. Similarly, Riley and Gildea (2018) improve the translation of such words by integrating orthographic information into the vector representation of words and into the mapping procedure of BWEs. On the other hand, these techniques are only applicable in the case of language pairs sharing the same script. Recently, Riley and Gildea (2020) proposed an unsupervised system based on expectation maximization and character-level RNN models to learn transliteration-based similarity, i.e., edit-distance similarity across different character sets. To train their system they took the 5,000 word pairs having the highest cosine similarity based on BWEs. However, this method could be noisy, since non-transliteration pairs could be generated as well. In this paper, we present our approach to BDI focusing on the problem of translating low-frequency words. We follow the approach of Braune et al. (2018) and improve low-frequency translation by combining a BWE-based model with other information coming from word surface similarity: orthography and transliteration.
The orthographic model is used in the case of word pairs with shared alphabet and uses the Levenshtein similarity. The transliteration model is used for pairs with different scripts where an orthographic comparison would not be possible and it is obtained from our novel fully unsupervised transliteration model. In contrast to (Riley and Gildea, 2020) , we propose a cleaning method for filtering non-transliteration pairs from the used dictionary before training the model to ensure a less noisy training signal. We test our system on the English-German pairs (En-De, De-En) and English-Russian pairs (En-Ru, Ru-En) provided in the BUCC 2020 Shared Task (Rapp et al., 2020) . We participate in both the open and closed tracks of the shared tasks, using embeddings extracted either from Wikipedia (Conneau et al., 2018) or WaCKy (Baroni et al., 2009) respectively. In addition to using a static number of most similar words as translation, we experimented with methods returning a dynamic number of translations given each source word. In the rest of the paper, we first describe the approach and how we obtain the two word surface similarity scores. Then, we present the experiments on the BUCC 2020 dataset and discuss the results. | 0 |
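A toy sketch of the kind of combination described above, interpolating BWE cosine similarity with a normalized Levenshtein similarity for same-script word pairs, follows; the interpolation weight and the embeddings are assumptions, not the system's actual configuration.

```python
# Toy sketch (assumed weight/embeddings): combine BWE cosine similarity with a
# normalized Levenshtein similarity to score translation candidates for a source word.
import numpy as np

def levenshtein(a, b):
    d = np.arange(len(b) + 1)
    for i, ca in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, cb in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
    return int(d[-1])

def combined_score(src_word, tgt_word, src_vec, tgt_vec, alpha=0.7):
    cos = float(np.dot(src_vec, tgt_vec) /
                (np.linalg.norm(src_vec) * np.linalg.norm(tgt_vec)))
    lev = 1.0 - levenshtein(src_word, tgt_word) / max(len(src_word), len(tgt_word))
    return alpha * cos + (1 - alpha) * lev   # higher = better translation candidate

rng = np.random.default_rng(0)
print(combined_score("Hund", "hund", rng.normal(size=50), rng.normal(size=50)))
```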
Linear context-free rewriting systems (LCFRS) (Vijay-Shanker et al., 1987) , the equivalent multiple context-free grammars (MCFG) (Seki et al., 1991) and simple range concatenation grammars (sRCG) (Boullier, 1998) have recently attracted an increasing interest in the context of natural language processing. For example, Maier and Søgaard (2008) propose to extract simple RCGs from constituency treebanks with crossing branches while Kuhlmann and Satta (2009) propose to extract LCFRS from non-projective dependency treebanks. Another application area of this class of formalisms is biological computing (Kato et al., 2006) .This paper addresses the symbolic parsing of sRCG/LCFRS. Starting from the parsing algorithms presented in Burden and Ljunglöf (2005) and Villemonte de la Clergerie (2002), we propose an incremental Earley algorithm for simple RCG. The strategy is roughly like the one pursued in Villemonte de la Clergerie (2002) . However, instead of the automaton-based formalization in Villemonte de la Clergerie's work, we give a general formulation of an incremental Earley algorithm, using the framework of parsing as deduction. In order to reduce the search space, we introduce different types of filters on our items. We have implemented this algorithm and tested it on simple RCGs extracted from the German treebanks Negra and Tiger.In the following section, we introduce simple RCG and in section 3, we present an algorithm for symbolic parsing of simple RCG. Section 4 then presents different filtering techniques to reduce the number of items. We close discussing future work. | 0 |
"Who says what, about whom (or what), and in what manner?" Interest in this question, and more broadly in attribution-type information, is not new, and it has even seen a certain revival in recent years with work seeking to measure the subjectivity (opinion, sentiment, emotion) involved in attribution. This interest is found in several disciplines (linguistics, sociology, natural language processing, documentation professions, etc.). For some, the aim is to measure the impact of a researcher's work (bibliometric measures) (Hirsch, 2005) (Ritchie et al., 2006), or question answering ("What does X think about Y?") (Somasundaran et al., 2007). Finally, for still others, the aim is to collect opinions on various topics (Wilson et al., 2005), potentially for commercial purposes such as tracking the impact of a product (Dave et al., 2003). This work covers several tasks: from detecting the presence of attribution to identifying the extent of its constituents, by way of measuring its rhetorical function, its subjectivity, its polarity and its context. Nevertheless, this coverage is uneven across languages, text genres and application motivations.
The application context of the present article is plagiarism detection and impact tracking from an original piece of writing, in French-language journalistic texts. This work is part of the ANR project PIITHIE (Plagiats et Impacts de l'Information Textuelle recHerchée dans un contexte InterlinguE). From this perspective, detecting quotations and their constituents (identifying the source and what it says) is essential, since it can allow the lawful or unlawful character of a reuse to be assessed. Indeed, plagiarism detection proceeds in particular by identifying similarities between two texts. The shared presence of quotations tends to artificially inflate the similarity between two texts that are not plagiarisms of one another. Conversely, under the assumption that quotations are not modified in a plagiarized text, quotations constitute good anchor points for identifying potential plagiarism by analyzing the surrounding text, and only that text.
In this article, we propose approaches for detecting quotations and classifying them as direct discourse (DD) or indirect discourse (DI), as well as for identifying the named entities whose discourse is reported and the text segments between quotation marks that carry DD. The interest in DD is explained by the fact that identifying a verbatim reuse facilitates impact tracking. To these ends, we built and annotated a corpus of journalistic writing.
Our problem differs from the one addressed in work carried out on English scientific texts (Teufel et al., 2006), where the sources are clearly identified (the author or the bibliographic references) and the forms of reuse are different, since for that genre citations constitute a positioning by the writing author that feeds their own discourse, whereas for journalistic writing, reported discourse constitutes the very essence of the article.
Our concerns are closer to the work on attribution recognition and subjectivity capture carried out on newspapers and blogs (Bethard et al., 2004; Choi et al., 2005; Stoyanov & Cardie, 2006; Somasundaran et al., 2007).
We nevertheless differ on several points. Besides the fact that this work targeted English, it relies above all on substantial lexical resources (to measure subjectivity), on syntactic resources (dependency trees or grammatical functions), and on semantic resources for the work that sought to identify source phrases and the clauses carrying reported discourse. In comparison, our approaches aim to exploit essentially surface cues of a typographic, morphological and positional nature. In this, we follow the approach described by (Giguet & Lucas, 2004). These cues are nevertheless complemented by a lexical resource of speech verbs produced by (Mourad & Desclés, 2004). To our knowledge, (Mourad & Desclés, 2004) and (Giguet & Lucas, 2004) are the only works carried out on French for the detection of quotations. Both techniques are rule-based. That of (Mourad & Desclés, 2004) is based on contextual exploration, which distinguishes cues acting as triggers from cues acting as confirmers. The model of (Giguet & Lucas, 2004) relies first on the co-presence of cues to recognize the source, reporting-marker ("relateur") and discourse constituents, then on the recognition of the SRD pattern (source + reporting marker + reported discourse), or its inverse DRS, to detect a quotation. In what follows, we are interested in the linguistic expression referring to the source, which we call the "speaker expression". We extend this model by placing no constraints on the possible patterns and by considering various recognition techniques, notably supervised learning. Furthermore, we provide a quantitative evaluation of our methods.
The detection problems are complex, since a quotation can be characterized by different types of information. The typology of (Jackiewicz, 2006) describes quotations according to the type of information accompanying the reported discourse (parameters of the enunciation situation (by whom, when, where and for whom), the intentions of the original speaker, and information addressed to the person to whom the discourse is reported). (Giguet & Lucas, 2004) Beyond these different types of information, there are several ways of rendering a discourse originating from an initial enunciation situation. Indeed, reported discourse can take different forms of integration, from direct style (literal reuse traditionally marked by quotation marks, cf. examples 1 and 2) to indirect style (discourse integrated into the reporter's own utterance, cf. example 3), via intermediate forms with islands of verbatim reuse (which we call the "component-based" style, cf. example 4). Moreover, besides the fact that it may be difficult to delimit a reported discourse, it can be fragmented (cf. example 2) or even spread over several sentences. To this we can add the fact that the pieces of information characterizing a quotation can take different morpho-syntactic forms, e.g. a verbal or prepositional reporting marker (cf. examples 1 and 3) or a source marked by a pronoun or a named entity (cf. examples 2 and 4); they can combine in different syntactic configurations (cf. examples 1 and 2), or end up distributed over a window spanning several sentences. A sketch of a surface-cue matcher is given after the examples below.
(1) Le quotidien économique souligne : "Si le rapport ne veut pas associer ces montants à l'idée d'une nouvelle 'cagnotte' budgétaire, ni au débat électoral sur le niveau de prélèvements obligatoires, le montant est équivalent au déficit budgétaire de l'État, à savoir 36,5 milliards d'euros l'an dernier."
(2) "En 2003, explique-t-il, j'ai fait effectuer douze tests sur des vols en France, dans onze cas sur douze, des armes et explosifs ont pu être introduits dans les avions".
(3) D'après sa mère, Julien faisait une sieste dans l'appartement familial lorsqu'il a disparu.
(4) Le Figaro estime, lui, que "les techniciens et les cadres sont en première ligne", notamment ceux de la "Central Entity" de Toulouse. | 0 |
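As a toy illustration of the kind of surface cues discussed above (quotation marks, a speech-verb lexicon, and the position of the source relative to the quoted segment), a simple pattern matcher might look like the following; the verb list and patterns are illustrative assumptions, not the system described in the article.

```python
# Toy illustration (assumed verb list/patterns): detect candidate direct-discourse
# quotations from surface cues: quotation marks and a nearby speech verb.
import re

SPEECH_VERBS = {"souligne", "explique", "estime", "déclare", "affirme"}
QUOTE = re.compile(r'[«"]([^»"]+)[»"]')          # segment between quotation marks

def detect_direct_discourse(sentence):
    candidates = []
    for match in QUOTE.finditer(sentence):
        before = sentence[:match.start()].lower()
        has_verb = any(v in before for v in SPEECH_VERBS)   # crude SRD-style check
        candidates.append({"quote": match.group(1).strip(),
                           "speech_verb_before": has_verb})
    return candidates

print(detect_direct_discourse(
    'Le quotidien économique souligne : "Si le rapport ne veut pas associer ces montants..."'))
```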
Information Extraction (IE) is the task of identifying information in texts and converting it into a predefined format. The possible types of information include entities, relations or events. In this paper, we follow the IE tasks as defined by the conferences MUC4, MUC6 and ACE RDC: slotbased extraction, template filling and relation extraction, respectively.Previous approaches to IE relied on cooccurrence (Xiao et al., 2004) and dependency relations between entities. These relations enable us to make reliable extraction of correct entities/relations at the level of a single clause. However, Maslennikov et al. (2006) reported that the increase of relation path length will lead to considerable decrease in performance. In most cases, this decrease in performance occurs because entities may belong to different clauses. Since clauses in a sentence are connected by clausal relations (Halliday and Hasan, 1976) , it is thus important to perform discourse analysis of a sentence.Discourse analysis may contribute to IE in several ways. First, Taboada and Mann (2005) reported that discourse analysis helps to decompose long sentences into clauses. Therefore, it helps to distinguish relevant clauses from non-relevant ones. Second, Miltsakaki (2003) stated that entities in subordinate clauses are less salient. Third, the knowledge of textual structure helps to interpret the meaning of entities in a text (Grosz and Sidner 1986) . As an example, consider the sentences "ABC Co. appointed a new chairman. Additionally, the current CEO was retired". The word 'additionally' connects the event in the second sentence to the entity 'ABC Co.' in the first sentence. Fourth, Moens and De Busser (2002) reported that discourse segments tend to be in a fixed order for structured texts such as court decisions or news. Hence, analysis of discourse order may reduce the variability of possible relations between entities.To model these factors, we propose a multiresolution framework ARE that integrates both discourse and dependency relations at 2 levels. ARE aims to filter noisy dependency relations from training and support their evaluation with discourse relations between entities. Additionally, we encode semantic roles of entities in order to utilize semantic relations. Evaluations on MUC4, MUC6 and ACE RDC 2003 corpora demonstrates that our approach outperforms the state-of-art systems mainly due to modeling of discourse relations.The contribution of this paper is in applying discourse relations to supplement dependency relations in a multi-resolution framework for IE. The framework enables us to connect entities in different clauses and thus improve the performance on long-distance dependency paths.Section 2 describes related work, while Section 3 presents our proposed framework, including the extraction of anchor cues and various types of relations, integration of extracted relations, and complexity classification. Section 4 describes our experimental results, with the analysis of results in Section 5. Section 6 concludes the paper. | 0 |
The rhetorical relations that hold between clauses in discourse index temporal and event information and contribute to a discourse's pragmatic coherence (Hobbs, 1985) . For example, in (1) the NARRATION relation holds between (1a) and (1b) as (1b) temporally follows (1a) at event time.(1) a. Pascale closed the toy chest.b. She walked to the gate. c. The gate was locked securely. d. So she couldn't get into the kitchen.The ELABORATION relation, describing the surrounding state of affairs, holds between (1b) and (1c). (1c) is temporally inclusive (subordinated) with (1b) and there is no temporal progression at event time. The RESULT relation holds between (1bc) and (1d). (1d) follows (1b) and its subordinated ELABORATION relation (1c) at event time.Additional pragmatic information is encoded in these relations in terms of granularity. Granularity refers to the relative increases or decreases in the level of described detail. For example, moving from (1b) to (1c), we learn more information about the gate via the ELABORATION relation. Also, moving from (1b-c) to (1d) there is a consolidation of information associated with the RESULT relation.Through several supervised machine learning tasks, we investigate the degree to which granularity (as well as additional elements of discourse structure (e.g. tense, aspect, event)) serves as a viable organization and predictor of rhetorical relations in a range of written discourses. This paper is organized as follows. Section 2 reviews prior research on rhetorical relations, discourse structure, granularity and prediction. Section 3 discusses the analyzed data, the selection and annotation of features, and the construction of several machine learning tasks. Section 4 provides the results which are then discussed in Section 5. | 0 |
Nowadays emoji are widespread throughout mobile and web communication, both in private conversations and in public contexts such as blog entries or comments. In 2015, the Oxford Dictionary declared the emoji Face with tears of joy "Word of the year", and since then the academic interest in the topic, as well as the development of relevant resources, have grown substantially. Emoji are best known to be markers for emotions, and in this sense they can be considered an evolution of emoticons. However, these pictographs can be used to represent a much wider range of concepts than emoticons, including objects, ideas and actions in addition to emotions, and thus they interact with the content expressed in the surrounding text in more complex ways. Furthermore, emoji are used not only at the end of a message, e.g. a tweet, but can occur anywhere and possibly in sequences. Therefore, understanding the semantic relation they have with the surrounding text, in particular whether emoji add independent meaning, is an important step in any approach attempting to process their contribution to the overall content of a given message, both for the purposes of sentiment analysis and natural language processing. We are interested in investigating to what extent it is possible for a human annotator, and subsequently for an automatic classifier, to determine if emoji in tweets are used to emphasize or add information, which may well be emotional information, but could also have a different semantic flavour. If emoji do add meaning, we also ask how easy it is to understand if they are being used as syntactic substitutes for words. In this paper, we focus on the corpus of English tweets that was collected and annotated to provide training data for a number of classifiers aiming at predicting whether emoji in microblogs are used in a redundant or a non-redundant way. The classification experiments achieved promising results (F-score of 0.7) for the best performing model, which combined LSA with handcrafted features and employed a linear SVM in a One vs. All fashion (a sketch of such a pipeline appears after the examples below). The process and results of the experiments will be described in a future paper (in preparation). In Section (2) we review related research, then in Section (3) we describe how the tweets were extracted and collected to create the corpus, and give counts of the various represented categories. In Section (4) the annotation process is described, Section (5) presents and discusses the results, and finally in Section (6) we provide a conclusion.
Examples of the three categories:
1. Redundant
• "We'll always have Beer. I'll see to it. I got your back on that one. [emoji]"
• "@USER I need u in Paris girls [emoji]"
2. Non-Redundant
• "I wish you were here [emoji]"
• "Hopin for the best [emoji]"
3. Non-Redundant + POS
• "Thank you so so so so much ily Here's a [emoji] as a thank you gift x"
• "Good morning [emoji]"
An edge case could be represented by:
• "Reading is always a good idea [emoji]. Thank you for your sincere support @USER. Happy reading."
In this case the emoji represents books, which are related to the verb "reading"; however, the act of reading does not necessarily imply the presence of books (it is not an entailment), since it is possible to read newspapers, blogs, comments, emails; the emoji is narrowing down the meaning of the verb, therefore it is adding information and we should consider it non-redundant. Emotions also represent a challenge since we need to rely on symbols or simplifications to depict complex expressions.
While a case such as:
• "i'm so proud of myself [emoji] *pats my back*"
is clearly non-redundant (here the emoji is used ironically), a tweet like:
• "My forever love [emoji] @URL"
represents redundant use. | 0 |
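A rough sketch of the classification pipeline mentioned above, TF-IDF followed by truncated SVD as an LSA step, concatenated with handcrafted features, and a linear SVM in a One-vs-All setup, is given here; the feature set and hyperparameters are placeholders, not the exact configuration of the reported model.

```python
# Rough sketch (placeholder features/hyperparameters): LSA features plus handcrafted
# features, fed to a linear SVM in a One-vs-Rest setup for the three categories.
import numpy as np
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

def handcrafted(texts):
    # e.g. tweet length and number of user mentions, as simple surface features
    return np.array([[len(t), t.count("@USER")] for t in texts], dtype=float)

model = Pipeline([
    ("features", FeatureUnion([
        ("lsa", Pipeline([("tfidf", TfidfVectorizer()),
                          ("svd", TruncatedSVD(n_components=100))])),
        ("manual", FunctionTransformer(handcrafted, validate=False)),
    ])),
    ("svm", OneVsRestClassifier(LinearSVC())),
])
# model.fit(train_tweets, train_labels)  # labels: Redundant / Non-Redundant / Non-Redundant+POS
```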
As building rich semantic representations of text becomes more feasible, it is important to develop standard representations of logical form that can be used to share data and compare approaches. In this paper, we describe some general characteristics that such a logical form language should have, then present a graphical representation derived from the LF used in the TRIPS system (Allen et al., 2007) .The Logical Form is a representation that serves as the interface between structural analysis of text (i.e., parsing) and the subsequent use of the information to produce knowledge, whether it be for learning by reading, question answering, or dialoguebased interactive systems.It's important to distinguish two separable problems, namely the ontology used and the structure of the logical form language (LFL). The ontology determines the set of word senses and semantic relations that can be used. The LFL determines how these elements can be structured to capture the meaning of sentences. We are addressing the latter in the paper. Consider some principles for designing useful LFs.The LFL should allow one to express the dependencies and subtleties that are expressed in the sentence. On the simple end, this means the LFL should allow us to represent the differences between the NP The pigeon house, which is a type of house, and the house pigeon, which is a type of pigeon. On the more complicated end, the LFL should be able to capture complex quantifier structures such as those in the NPs Nearly all peaches, or Every dog but one, and phenomena such as modal operators, predicate modifiers, and explicit sets.One might argue that capturing such complex phenomena in the LFL is premature at this time, as existing techniques are unlikely to be able to produce them reliably. On the other hand, if we don't allow such subtleties in the gold-standard LFL, we will tend to stifle long-term work on the difficult problems since it is not reflected in the score in evaluations.This issue has a long history in the literature, with the most classic case being quantifier scoping. Underspecified representations of quantifier scoping are a prime focus in the development of modern logical form languages such as MRS (Copestake et al., 2006) , and work goes all the way back to early natural language systems (e.g. Woods, 1978) . Other techniques for compactly encoding ambiguity include prepositional phrase attachment, and most critically, the use of vague predicates and relations. For example, for many cases of noun-noun modification, the exact semantic relation between the nouns cannot be determined, and actually need not be determined precisely to be understood.In many cases, because of limitations in current processing, or because of the fragmentary nature of the language input itself, a system will only be able to construct partial interpretations. The LFL should be constructed in a way such that partial representations are easily compared with full representations. In particular, the interpretation of a fragment should be a subset of the full logical form of the entire sentence. It is a fortunate circumstance that representations that tend to compactly encode ambiguity tend also to have this subset property. 2 Overview of LF Graphs An example LF-graph is shown in Figure 1 . This graph introduces much of the formalism. Each node represents either a speechact, a proposition, a generalized quantifier, an operator or a kind. 
Nodes are labelled in three parts, the specifier, indicating the semantic function of node, the type, indicating conceptual class drawn from the ontology, and the word from the input. The latter allows us to relate the nodes in the LF graph back to the input. The edges are labelled with semantic roles that indicate argument structure and other critical properties such as modification relationships.Consider each of the core node types. The first term type captures the meanings of fragments that define eventualities (i.e., events and properties). For instance, the node (F FOLLOW chase) in Figure 1 refers to an eventuality of the type FOLLOW (which would be defined in the ontology). Additional information about the eventuality is captured by the outgoing edges, which identify two arguments, the :Agent and the :Theme, and one other that provides the tense information for later contextual interpretation (PRES is the present tense).The second node type captures generalized quantifier constructions. The node (THE ANIMAL cat) indicates a definite description referring to an object of type ANIMAL in the ontology. Generalized quantifiers that have universal import are indicated as shown in the node (QUANTIFIER ANIMAL dog), where an edge labelled :QUAN gives the specific quantifier involved. Note also the presence of a modification to the type (the :MOD) arc, which points to another eventuality, namely (F LIVING-PROPERTY-VAL hungry), which in turn has an argument (:OF) pointing back to the modified node. The :MOD link is critical for capturing dependencies that allow us to reconstruct the full logical form from the graph. For instance, it allows us to retain the distinction between head noun and the modifiers (e.g., the pigeon house vs the house pigeon). Table 1 shows the core set of generalized quantifiers used in TRIPS (and subsequently interpreted in discourse processing, especially reference resolution. A large set of quantifiers that indicate the size (e.g., many, some, five, at most three, a few, ...)are treated as an indefinite construction with a (often vague) size modifier. (we expect to be able to resolve it from context) A an indefinite form (we expect it to introduce new objects) PRO a pronoun form (we expect it to be resolved from local context) IMPRO an implicit anaphoric form BARE forms with no specifier and ambiguous between generic, kind, and indefinite QUANTIFIER "universally" quantified constructions (e.g., EVERY) QUANTITY-TERM a quantity expressed in units (e.g., three pounds) WH-TERM "wh" terms as in questions (e.g., which trucks) KIND the definition of a kind (aka lambda abstraction)The next term type specifies modal operators, and seen in Figure 1 as the node (OP FREQUENCY usually). The operator nodes must be distinguished from the terms for predications (F) to support algorithms for quantifier and operator scoping.The final class of node in Figure 1 is the speech act performed by an utterance: (SPEECHACT TELL). This has no third argument as it does not arise from any single word in the utterance. The semantic role :content indicates the propositional content of the speech act, and additional roles indicating the speaker and hearer are suppressed. Speech acts have modifiers in order to handle phenomena such as discourse adverbials. Figure 2 shows another LF graph which captures some additional key constructions. It shows another speech act, for Wh-questions, and shows the handling of plurals. 
LF graphs distinguish explicitly between singular and plurals by modeling sets, in which an :of argument that points to the type of objects in the set.The KIND operator is used to define these types (aka lambda abstraction). Thus the three small engines is a SET of size three with elements of KIND ENGINE and which are small.LF-graphs are interesting as they offer the possibility of comparing the semantic content of different approaches, ranging from shallow approaches that identify word senses and semantic roles, to complex representations produced by state-of-the-art deep parsers. On the shallow side, a word sense disambiguation system would produce a set of nodes with the word senses labeled from an ontology, but not indicating a specifier, and not capturing any semantic roles. A system that identifies semantic roles can capture its results using the edges of the graph.On the other hand, we can show that the LF-graph formalism is equivalent to the TRIPS logical form language (LFL), which is a "flat" scope-underspecified representation of a reference modal logic with generalized quantifiers and lambda abstraction.We have developed an efficient quantifier scoping algorithm on this LFL that constructs possible fully-scoped forms in the reference logic, and we can prove that we derive the same sets of possible interpretations as the representations constructed by MRS (Manshadi et al., 2008) . Figure 3 shows the TRIPS logical form that produced The final information encoded in the LF graphs is coreference information. Referential expressions are connected to their antecedents using a :coref arc. Note this can only encode referential relations to antecedents that actually appear previously in the text. Simple forms of bridging reference can also be encoded using the insertion of IMPRO nodes that stand in for implicit arguments, and may then co-refer with terms in the graph. | 0 |
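To make the graph structure above concrete, the sketch below shows one way such an LF-graph could be encoded programmatically. It is an illustrative Python encoding only: the specifiers, ontology types and role labels are taken from the examples above, but the class design, the helper names and the edge used for tense are our own assumptions, not the TRIPS implementation.

```python
# Illustrative sketch (not the TRIPS code): an LF-graph as labelled nodes with role-labelled edges.
class LFNode:
    def __init__(self, specifier, onttype, word=None):
        self.specifier = specifier   # e.g. F, THE, QUANTIFIER, OP, SPEECHACT
        self.onttype = onttype       # conceptual class drawn from the ontology
        self.word = word             # surface word, if the node arises from one
        self.edges = {}              # role label -> target node (or atomic value, in this sketch)

    def add_edge(self, role, target):
        self.edges[role] = target

    def __repr__(self):
        parts = [self.specifier, self.onttype] + ([self.word] if self.word else [])
        return "(" + " ".join(parts) + ")"

# Rebuilding part of the Figure 1 example: a TELL speech act whose :content is a FOLLOW
# eventuality with an :Agent (a quantified, modified dog) and a :Theme (the cat).
cat = LFNode("THE", "ANIMAL", "cat")
dog = LFNode("QUANTIFIER", "ANIMAL", "dog")
hungry = LFNode("F", "LIVING-PROPERTY-VAL", "hungry")
dog.add_edge(":MOD", hungry)
hungry.add_edge(":OF", dog)          # back-link that keeps head/modifier dependencies recoverable

chase = LFNode("F", "FOLLOW", "chase")
chase.add_edge(":Agent", dog)
chase.add_edge(":Theme", cat)
chase.add_edge(":TENSE", "PRES")     # tense kept for later contextual interpretation (edge name assumed)

usually = LFNode("OP", "FREQUENCY", "usually")   # operator node; its attachment is omitted here

speechact = LFNode("SPEECHACT", "TELL")
speechact.add_edge(":content", chase)

print(speechact, speechact.edges[":content"].edges[":Agent"])
```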
Predicting the quality of scientific articles is a novel task in the field of deep learning. There are many indicators of quality such as whether a paper was accepted or rejected, meta-information such as the author's h-index(es), and the number of citations. The number of citations, while not a perfect indicator of quality, is available for any paper which makes it suitable for constructing a large dataset. In this work we propose ACL-BiblioMetry, a new dataset consisting of 30000 papers with citation information. We also test several state-of-the deep learning models and propose a new model called SChuBERT which outperforms all other methods.Using the full text of scholarly documents has the potential to substantially improve the performance of the citation count prediction task. But prohibitive memory costs of applying advanced deep learning models on the full text can be a roadblock. In particular, BERT (Devlin et al., 2018) and its variants have been very successful as building blocks for state-of-the-art natural language processing models for many tasks. Citation count prediction for scholarly documents is a task where BERT has clear potential as well. However, scholarly documents are particularly long texts in general. Since BERT has a time complexity that is quadratic with respect to the input length, it is limited to 512 tokens by default, a limit which can not be increased by much without causing prohibitive computational cost.Recent models including the Reformer (Kitaev et al., 2020) and Longformer (Beltagy et al., 2020) have sought to overcome the quadratic computational cost of the Transformer model (Vaswani et al., 2017) underlying BERT. While these models are very promising, they do not offer the unsupervised pre-training on large amounts of data that makes BERT so powerful as of yet. Although in principle these models could be applied as a dropin replacement for BERT, it requires more research to show if and how unsupervised pre-training as done in BERT can be made to work well with very long context. For these reasons, in this work we use BERT as our base building block and find effective ways to overcome its input length limit, leaving experimentation with the aforementioned models for future research.For dealing with large amounts of training examples containing very long input text we need an approach that: 1) Is able to fit the encoding of the long text into memory, 2) can efficiently process the large amount of training examples when training over many epochs. Both requirements can be fulfilled by chunking the long input text of our examples into parts, and pre-computing BERT embeddings for each of these parts using a pre-trained BERT model. The core of the final model is a sequence-model, in particular a gated recurrent unit (GRU) (Cho et al., 2014) , which directly uses the pre-computed chunk embeddings as inputs. This approach simultaneously overcomes the memory problems associated with dealing with very long input texts, as well as achieves high computational efficiency by performing the expensive step of computing BERT embeddings for chunks only once.While the task of citation count prediction using the contents of a scholarly document is not new, and goes back at least to the work of Fu and Aliferis (2008) , work up until now has been limited in: a) the size of the training data, b) the size of the input text. 
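As a rough illustration of the pre-computation step described above, the snippet below splits a long document into fixed-size token windows and stores one BERT vector per chunk. The model name, chunk length and [CLS] pooling are assumptions made for the sketch and are not necessarily the settings used in the work described here; the point is only that the expensive BERT pass happens once per chunk and the results can be cached.

```python
# Sketch: chunk a long document and pre-compute one BERT embedding per chunk.
# Assumptions: 'bert-base-uncased', ~510-token chunks, [CLS] pooling.
import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

def chunk_embeddings(text, chunk_len=510):
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + chunk_len] for i in range(0, len(ids), chunk_len)]
    vecs = []
    with torch.no_grad():
        for c in chunks:
            inp = torch.tensor([[tokenizer.cls_token_id] + c + [tokenizer.sep_token_id]])
            out = bert(inp)
            vecs.append(out.last_hidden_state[0, 0])   # [CLS] vector for this chunk
    return torch.stack(vecs)   # (num_chunks, hidden_size); computed once, then cached to disk

emb = chunk_embeddings("some very long scholarly document " * 200)
print(emb.shape)
```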
Table 1 gives an overview of the data used in earlier work. Note that most of it is restricted to using only the title + abstract as well as a small number of examples, while Maillette de Buy Wenniger et al. (2020) substantially increase the number of examples but still use only a limited part of the body text available from S2ORC (Lo et al., 2019). In this work, we show that both of these factors have a large influence on the accuracy of models predicting citation counts. Essentially, state-of-the-art methods cannot be adequately evaluated with training data that is too small. Therefore, apart from providing state-of-the-art results for citation count prediction on a dataset currently unmatched in terms of the number of examples with full-length input text, we also provide the code for other researchers to rebuild our dataset, and the methodology for using the Semantic Scholar database to label new collections of scholarly documents for citation count prediction.

The rest of the paper is organized as follows: in section 2 we discuss related work, in section 3 we describe the models used for citation count prediction, in section 4 we discuss the dataset construction, in section 5 we present our experiments, in section 6 we show our results, and in section 7 we end with conclusions. | 0
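A minimal PyTorch sketch of the sequence model described above: a GRU that consumes the pre-computed chunk embeddings and predicts a scalar (for example a log-transformed citation count). The layer sizes and the use of the final hidden state are illustrative assumptions, not the exact configuration of the model evaluated here.

```python
import torch
import torch.nn as nn

class ChunkGRURegressor(nn.Module):
    """Sketch of a SChuBERT-style head: GRU over pre-computed chunk embeddings -> scalar prediction."""
    def __init__(self, emb_dim=768, hidden=256):
        super().__init__()
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, chunk_embs):              # chunk_embs: (batch, num_chunks, emb_dim)
        _, h_n = self.gru(chunk_embs)            # h_n: (1, batch, hidden), final hidden state
        return self.out(h_n[-1]).squeeze(-1)     # predicted (e.g. log) citation count per document

model = ChunkGRURegressor()
fake_docs = torch.randn(2, 12, 768)              # 2 documents, 12 pre-computed chunk embeddings each
print(model(fake_docs).shape)                    # -> torch.Size([2])
```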
In other work, I have shown (EUison 1992, forthcoming) that interesting phonological constraints can be learned despite the presence of exceptions. Each of these constraints imposes a limit the set of possible words at a common level of repre~sentation. In this paper, I consider possible limits to the usefulness of these constraints in representing morphemes and finding concise representations of lexical entries.In order to compare a strictly declarative formalism with other constraint formalisms, a common formal environment must be established. Using model theory to establish the relationship between description and object, and then a modal formalism to define the structures to which constraints apply, we can compare the different effects of strict constraints and defaults. In particular, a strict declarative approach can be compared with other constraint frameworks such as Underspecification Theory (UT) (Archangeli, 1984) and ()ptimality Theory (OT) (Prince & Smolensky, 1993 ). This discussion is followed in tim latter part of the pal)or by consideration of the possibility of using machine learning to constraint systems that use defaults.To structure the disct~ssion, I offer four desiderata for morphophonology. The first is that the morphophonology must allow concise lexical representations. Where information is predictable, it should not have to be specified in the lexicon. This desideratum is not a matter of empirical accuracy, rather one of scientific aesthetics. For example, English contains no front rounded vowels, so a vowel which is marked as front in the lexicon need not be marked as unrounded.The second desideratum is that the morphophonology should allow generalisations to be made over phonologically conditioned aUomorphs. For example, a representation of the Turkish plural affixes -lar, -ler, that uses the feature [:t:front] is superior to a segmental representation because a single representation for the two allomorphs can be achieved by not specifying the value for this feature in the representation of the morph.The third desideratum requires ttlat the specific allomorphs be recoverable from the generalisations. If-lar and -ler are generalised in a single representation, such as -IAr, then the morphophonology should make the recovery of the allomorphs in the correct environments possible.The final desideratum is, like the first, a matter of scientific aesthetics: a priori abstractions should not be used in an analysis any more than is necessary. For example, the feature [:t:front] should not be used in the analysis of a language unless it is motivated by structures in the language itself. This desideratum may conflict with the first: a priori features may result in a more concise representation.These four desiderata provide a framework for evaluating the relative merits of monostratal systems of phonological constraints with other current theories such as Underspecification Theory and Optimality Theory.A fundamental distinction in any formal account is the distinction between description and object. Failure to make the distinction (:an lead, at best, to confusion, and, at worst, to paradoxes, such as Russell's Paradox. Because this theory is talking about theories, it must make the distinction explicitly by formalising the relationship between description and object. 
This distinction is pursued in below and developed into a formalism for complex structures in the following section.In model theory, the meaning of a statement in a formal l~mguage is provided by means of an INTERPRETATION FUNCTION which maps the statement onto the set of (Jbje(:ts for which the statement is true. If L is a language and W is a set of .t)jects, and P(W) is the set of all snl)sets of W, then the interpretation function I ma.ps L onto P(W):I : L ~ ~(W).As an example, suppose & is a horse, ~ is a ferret and q) is a large stone, and that these are the objects in our world. We might define a language L0 containing the terms big, animate, slow and human, and assign these terms the interpretations in (1).(1) Term T Interpretation I0 (T) big {a, V}animate {$, ~ } slow { ~ , V} human {}This language can be expanded to include the logical operations of conjunction, disjunction and negation. These are provided a semantics by combining the semantics of the terms they apply to.(2) Term Interpretationl • io Io(l) X A Y I(X) N I(Y) X VY I(X) UI(Y) -~x w \ i(x)With this interpretation function, we can determine that big A animate A slow is a CONTRADICTION having a null interpretation in W, while big V slow is a TAUTO-LOGY as I(big V slow) is the same as I(big) U I(slow) which equals W. The term PREDICATE will be used to describe a statement in a language which has a well-defined interpretation.Model theory as defined in section applies only to domains with atomic, unstructured objects. More complex structures can be captured by extending the theory of models to refer to different worlds and the relationships between them. Such a complex of worlds and relations is called a MODAL logic.A modal theory consists of a universe U is a set of worlds Wj,jew, called TYPES, together with a set of relations Rk,kET¢ : Wdom(j) ~ Wcod(k ) from one world to another. Types may contain other types, and whenever a type is so contained, it defines a characteristic relation which selects elements of that subtype from the larger type. A language for this universe is more complex as well, needing a function w : L ---+ I to indicate the type W~( 0 in which any given expression l is to be interpreted. A MODAL OPERATOR rk is a special sym-I)ol in tile language which is interpreted as the relation Mo(hfl operators can combine with predicates to construct new predi(:atcs. If ¢ is a predicate, rk is a modal operator and w(¢) = cod(k) then we can define am interpretation, I(rk¢) C Wdom(k) , for rk¢, nanmly R~ I[I(¢)]. l~lrthcrmore, we define the type of the expression to be the (lomain of the fimctor: w(rk¢) = dom(k). The interpretation of any wellformed sentence in this language is a sul)set of the corresponding world I(¢) C_ W~(¢).From here on, we will assume that tile Rk,ken are functions, and call the corresponding operators of the language FUNCTORS. Functors simplify the interpretation of predicates: inverses of functions preserve intersection, so functors distribute over conjunction as well as disjunction.A path equation defines a predicate which selects entities that have the same result when passed through two different sequences of functions. Suppose that p and q are two sequences of functors with the same first domain and last codomain, and that the composition of the corresponding sequences of functions are P and Q respectively. 
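As a brief aside, the toy model in (1) and (2) can be written out directly: the world is a plain set, the interpretation function maps terms to subsets of it, and conjunction, disjunction and negation fall out as intersection, union and complement. The sketch below uses readable names (horse, ferret, stone) in place of the symbols of the original table, which did not survive extraction cleanly.

```python
# Sketch of the toy model: a three-object world, an interpretation function from terms to
# subsets of the world, and the boolean connectives interpreted as set operations.
W = {"horse", "ferret", "stone"}

I0 = {
    "big":     {"horse", "stone"},
    "animate": {"horse", "ferret"},
    "slow":    {"ferret", "stone"},
    "human":   set(),
}

def interpret(expr):
    """expr is either a term (string) or a tuple ('and'|'or'|'not', ...)."""
    if isinstance(expr, str):
        return I0[expr]
    op = expr[0]
    if op == "and":
        return interpret(expr[1]) & interpret(expr[2])
    if op == "or":
        return interpret(expr[1]) | interpret(expr[2])
    if op == "not":
        return W - interpret(expr[1])
    raise ValueError(op)

# A contradiction denotes the empty set; a tautology denotes the whole world.
print(interpret(("and", "big", ("and", "animate", "slow"))))   # set()  -> contradiction
print(interpret(("or", "big", "slow")) == W)                   # True   -> tautology
```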
Then the interpretation of p = q is the set of entities x in the common domain such that A predicate in the corresponding modal language, using only the characteristic predicates of the types and the functors, might be: head a meaning the set of non-null strings whose first letter is a, left head a A right head c to specify the context a__c, or head c A right(head a A right(head b A right null)).P(x) = Q(x).By the use of functors, we can move from one type to another, or from one item in a type to another item in the same type. Metaphorically, we will call the types joined by fimctors LOCATIONS, particularly when the type instances are only distinguished by flmctorial relationships with other types.In a complex structure, like a string, the functors provide a method for interrogating nearby parts of the the structure within a predicate applying at a given position. By an appropriate choice of types and functors, complex feature structures and/or non-linear representations can be defined. For the sake of simplicity, the discussion in the remainder of this paper will be restricted to strings constructed using the types and functors defined above.In model-theoretic terms, a constraint is any wellformed expression in the language to which an interpretation is attached. Phonologists also use the term, usually intending universal application. It will be used here for a single predicate applying at a particular location in structure.As an exmnple of a constraint, consider front vowel harmony in Turkish t. Informally, we can write this constraint as if the last vowel was front, so is the current one. In the format of a phonological rule, this might be written as [+front]C*J~ ~ [+front] , where C* stands for zero or more consonants. F is used to represent the disjunction of all of the front vowels.(4) Left = ~ (left head C h left Left)V left head F Constraint = head F V --,LeftIn (4) the left context is abstracted into a named predicate called Left. This is because the left context iterates over consonants. This iteration appears in the definition of Left as the recursive call: if the immediate left segment is a consonant, move left and check again. Left succeeds immediately if the immediate left segment is a front vowel.Note the the predicate defined here imposes no restrictions at all on where it applies except that it be a non-null string. On the other hand, it only applies at the current location in structure. The relationship betwecn constraints and locations is the topic of the next section; first in the discussion of features, and then in the prioritisation of default feature assignment.The question ariscs as" to what basic predicates should be used in defining the lexical specification of phonological items. Lexical specifications in phonology are traditionally built from binary features. While the the feature values usually correspond to a priori predicates, there is no reason why a feature cannot be defined for an arbitrary predicate: ¢ defining the feature [+¢] everywhere that ¢ is true and [-¢] everywhere that ¢ is false. This section includes discussion of two kinds of feature system here: A PRIORI and EXCEPTION-MARKING.Traditionally, the choice of features is made a priori (an A Priori Feature System --APFS). This does not mean that phonologists do not select their feature sets to suit their problems, rather that they do not approve of doing so. 
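Constraint (4) can be read as a recursive predicate over string positions, and the sketch below checks it position by position for a Turkish word. The segment classes are simplified to orthographic vowels and the function names are ours; the recursion over consonants in Left is flattened into a loop.

```python
# Sketch: front-vowel harmony as a predicate applying at each position of a segmental string.
# "Constraint = head F  or  not Left", where Left scans leftwards over consonants (the C* of
# the rule) and succeeds iff the nearest vowel to the left is front.
FRONT = set("eiöü")
BACK = set("aıou")
VOWELS = FRONT | BACK

def left(word, pos):
    """True iff, skipping consonants, the nearest segment to the left is a front vowel."""
    i = pos - 1
    while i >= 0 and word[i] not in VOWELS:
        i -= 1
    return i >= 0 and word[i] in FRONT

def constraint(word, pos):
    """Satisfied at pos iff the segment is a front vowel, or the left context is not front."""
    return word[pos] in FRONT or not left(word, pos)

word = "seviyorum"   # 'I like (cts)'; the o after i is the single exception noted in (6)
print([(word[p], constraint(word, p)) for p in range(len(word)) if word[p] in VOWELS])
```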
Instead, acoustic or articulatory grounds t "l~lrkish Ires eight vowels, a, e, i the back version of i, o and its front correlate 6, and u and the corresponding front vowel /i. are sought for a universal set of features which will serve for all analyses.Furthermore, features in traditional systems are context free. The predicates defining the features do not make reference to neighbouring structures, such as the segment to the right or the left, in order to determine the feature polarity in a given position. Feature values depend only on the segment at that position in the string.Continuing to draw our examples from Turkish vowels, front can be thought of as the predicate head (eV i V 6 V fi). This predicate is context-free: there are no uses of the functors left and right in the definition. We can define the feature values [+front] and [-front] as holding at each non-null position in the string where front is true and false respectively.A more adventurous feature system brings context together with the local segmental value to define its features. The question arises as to which predicates from this wider range should be chosen. The principle of Epicurus (Asmis, 1984) suggests that no choice should be made until direct evidence is adduced. In this domain the evidence comes in the form of a constraint on phonological structure. So, if it appears that ¢ is an interesting constraint on phonological structure, then [=t=¢] should be used as a feature. This choice is less ad hoc than introducing new predicates a priori.As an example of this kind of feature assignment, consider the constraint (4) applied to the word seviyorurn I like (cts), which has the structure shown in (5).mdl nut! T T .... n-, n n ~ i~n n-n ~ n-u n.n 6---r ,., n., ,*n (5) ..... , ........The features assigned by the constraint are shown in (6). For clarity, the segments and head functors are not shown. To make the clearer, the positive and negative feature marks are shown as ticks and crosses respectively.nl.dl mdl T' * T" I¢N jZ.j t rj-pZ ii-f/ zl.n ~ rl-if k'fs IonIn only one case does this feature assign a negative value, ie. there is only one exception to the constraint in this word. This exception is the occurrence of the back vowel o after the front vowel i. The segments themselves provide non-arbitrary context-free predicates which can be used as features. For example, we could define a feature [:t:a] which is true if and only if head a is true.These kind of feature systems are called EXCEPTION-MARKING FEATURE SYSTEMS (EMFSs) becm~se it is exceptions to identified constraints which define all but the most basic features.In EMFSs the number of features is likely to be much b~rger than in traditional systems. On the other hand, each of the features correspond to either a segment or a phonological constraint or a segment, so the system as a whole is ontologically simpler than a APFS. Nevertheless, unless some method of compression is used, EMFSs will demand verbose lexical forms. Two types of compression are familiar to, though seldom distinguished by, phonologists: redundancy and defaults 2. In terms of model theory the distinction is clear. Redundancy rules have no effect on the interpretation function I, while defaults modify it. This section discusses underspecification that eliminates redundancy. The next section discusses defaults.A predicate ¢ is FULLY SPECIFIED FOR another predicate ¢ if either ¢ is more specific than ¢, that is, I(¢) = I(¢)NI(¢), or ¢ contradicts ¢, I(¢)f'lI(¢) = 0. 
A FULLY SPECIFIED predicate is one which is fully specified for all other predicates.Intuitively, a fully specified predicate is one which is indivisible. There is no extra restriction which can be imposed which will make it more specific; it can only be contradicted. If ¢ is a fully specified predicate, then there is no point in adding further information to it.If the interpretation function I is computable, then each feature value at each position in a fully-specified structure can be calculated. If the conjunction of the feature predicate with the structure has a null interpretation, then the feature is false, otherwise it is true. Consequently, so long as a predicate remains fully specified, any feature specifications which are removed from it can be recovered.In APFSs, the constraints associated with features will not be very interesting. When the features are contextual constraints, however, regaining the full specification amounts to a process of phonological derivation albeit one of simultaneous application of constraints.Let us utilise the Turkish vowel set for another exampie. Suppose each vowel is assigned a feature, and so is the vowel harmony constraint, (4). For each vowel, x[ marks the presence of the vowel, × its absence. The same symbols mean the satisfaction of a constraint or its failure. Table 7shows redundant feature specifications with a box around them. The example word is severira I like. Features for the consonants are not shown for the sake of brevity.2Calder & Bird (1991) make this distinction using the (',l'SG-like terms feature-cooccurrence restrictions (FCi~s) ;tlld ti~ature-specification defaults (FSDs). Constraint(4) ~/ y/ ~/ ~/ ~ ~/. ~/ a X X X ~ X ~ X e X ~ X X X X ! X X X X IX I X i Z X X X X ~ X 0 X X X ~ X X i x x x x x x x x x x [xl x Ixl x U X X X X X X XNote that this is not the only possible selection of redundant specifications. If the vowel feature specifications are regarded as primary and non-redundant, then the constraint feature values can all be regarded as redundant. At this point we can define the declarative phonological formalism we are evaluating. It is an EMFS with redundant features removed, called Exception Theory (ET).Identifying fully specified predicates allows us to compress representations by removing predictable specifications from predicates. This compression method can be enhanced by modifying the interpretation fimction so that more predicates are fully specified.A DEFAULT is defined in terms of a special predicate which will not need to be specified in individual representations. A representation will be conjoined with the default predicate unless it is already fully specified for it.There may be a number of default predicates in a default system. For this reason the formal definition of the effect of defaults on the interpretation function has the recursive structure shown in (8):(s)x~,~(¢) = I~(¢) if ¢ is fully specified for 6 wrt Ia, or I~(¢) n Ia(6) otherwise.Each default predicate specifies its action at only one position in the structure. If the default is to apply at many positions in a structure, more default predicates must be added to cover each position in the structure.For example, take the default predicate ~ to be the as a default will have no effect on the interpretation. Thus the two orderings of the defaults produce conflicting interpretations. Since the two orderings produce different results, a decision about the ordering of defaults must be made.Ordering Defimlts need to be ordered. 
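Before looking at particular orderings, here is a schematic picture of what (8) amounts to when defaults are tried in a fixed priority order and predicates are modelled extensionally as candidate sets: each default narrows the set only if doing so does not empty it. The stress example is a toy of our own construction, intended only to show the mechanics.

```python
# Sketch: defaults as filters on a candidate set, applied in priority order.
# A default is imposed only if imposing it does not contradict what is already specified
# (i.e. does not empty the candidate set); earlier defaults take priority over later ones.
def apply_defaults(candidates, defaults):
    current = set(candidates)
    for default in defaults:                        # priority order: first = highest
        narrowed = {c for c in current if default(c)}
        if narrowed:                                # skip the default if it would contradict
            current = narrowed
    return current

# Toy stress example: candidates are syllable strings, '1' = stressed, '0' = unstressed.
candidates = {"100", "010", "001", "000", "101"}
no_clash = lambda c: "11" not in c                  # higher-priority constraint: no adjacent stresses
stress_near_left = [lambda c, i=i: c[i] == "1"      # positional defaults: nearer the left edge = higher priority
                    for i in range(3)]

print(apply_defaults(candidates, [no_clash] + stress_near_left))   # -> {'101'}
```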
There are a number of ways that the ordering of groups of defaults can be specified. Three of these are presented here.One method for ordering defaults is to order the features they instantiate. We begin with an ordering on the features, so that, for example, feature [+F] has higher priority than feature [+G] If 5i is compatible with a predicate ¢, then there is a fully-specified restriction on ¢ which has no more than i occurrences of [-F] . The ordering on the defaults is imposed by requiring that for any feature [+Fi], with the corresponding predicate 6i, 5i has priority over 5j iffi < j.Suppose we already have a number of higher priority constraints on stress: that it can only be assigned once and in only one position within a syllable, and that consecutive syllables cannot be stressed. Collapsing the representation of syllables into a single symbol a for convenience, table 11gives the assignment of stress to a number of partially specified representations. The default feature is [+Stress] , and this is applied to minimise the number of failures. Ordering by position Another possibility is to order defaults by how far away from the starting position they specify their features. There are two simple ways of relating distance to priority: closer means higher priority, or further away means higher priority. The formal definitions for this kind of default ordering are straightforward. Suppose, once again, that [+F] is the feature value to be filled in by the defaults. Now, 6i will denote the specification of a default value at a distance of i functors to the left, or i to the right of the starting position.(12) 5i = 6o = ~i+ l (~i+1 = right~iA6i [+F] = ~0 left 5i V null right ~i V nullTo prefer near defaults, prefer Ji over 5j when i < j. For far defaults, do the reverse.Directional default preferences minfic the application of phonological rules in a left-to-right or right-to-left direction. Using this ordering, directional defaults (:an restrict some structures which the counting defaults cannot. Consider once again the stress assignments by defaults in table (11). Instead of simply trying to maximise the number of stresses, assume that the starting position is the left end of the word, and that near stresses are given priority. Under this system of defaults, the first of the three underspecified representations is rendered more specific, while the other two make the same restriction. These results are shown in Archangeli (1984) , differing in that the structures described are linear and segmental. This is, however, not a necessary limitation of the framework, and the definition of of underspecification theory presented here could be applied to autosegmental representations if suitable types and functors were defined for them. In UT, lexical specifications are made in terms of an a priori fixed set of features. For example, Archangeli & Pulleyblank (1989) (Archangeli, 1984:85) ensure that redundancy rules apply before the features they instantiate are referred to. Furthermore, these constraints apply as often as necessary (Archangeli & Pulleyblank, 1989:209-210) . This has the same effect as the automatic specification of redundant feature values in the (:urrent framework.Only one type of feature value is ever lexically specified in UT. Opposite feature values are filled in by default rules. 
This allows the feature specifications for some segments to be subspecifications of those for other se~lnelltS.Apart from the context-free features used ill lexical specifications, there are also context-sensitive constraints which are regarded in UT as fiflly-fledged phonological rules. For example, the Yoruba vowel harmony rule can be summarised as a vowel on the le~t of a [-ATR] vowel will also be [-ATR]. Regularity to this constraint in one position may conflict with regularity in another position. In UT, the defaults associated with such constraints are ordered by position: Yoruba vowel harmony applies right-to-left in the sense that constraint applications further from the beginning of the word have higher priority.This directionality is not the only ordering of defaults. As it happens, there are no [+high] The general structure of UT, therefore, is to have an a priori limited set of features for lexical specification and a set of defaults for these features and for constraints. The defaults associated with each feature or constraint are ordered by position.Optimality Theory (Prince & Smolensky, 1993) is apparently a very different theory, but, when classified in terms of its use of defaults, is actually quite similar.In contrast to UT, OT is deliberately vague about underlying representations. Instead of discussing the manipulation of representations directly, OT refers to their interpretations, terming them CANDIDATE SETS.Constraints in OT apply exactly like defaults. If they can be imposed without resulting in a contradiction (empty candidate set), then they are. Each constraint imposes a set of defaults, and these are primarily ordered by an extrinsic ordering placed on the constraints. If any two defaults pertaining to two constraints conflict, the default of the higher order constraint is preferred.As with UT, there is the possibility that tile imposition of the the santo constraint at different locations will conflict. Rather than ordering these defaults by position, they are ordered by the number of exceptions to the constraint that they allow. If there is a candidate form with a certain number of exceptions, all candidates with more exceptions will be eliminated by the default. This ordering on defaults is the ORDERING BY FAILURE COUNT described earlier.In contrast to the other two, more standard, phonological theories, Exception Theory does not use defaults. In ET, each lexicai form is fully specified, and any feature in it may be removed so long as this property is preserved.The set of features includes a feature for each segnmnt type, and a feature for each constraint. While this results in a large set of features, underspecification of redundant features means that many feature specifications may be eliminated. Nevertheless, there will be more feature specifications needed in ET than in, for example, UT, because of the absence of default values.On the other hand, because ET uses no defaults, there is no need for any form of constraint or rule ordering. All features have an immediate interpretation through the interpretation function, and so a minimum of computation is needed to identify the denotation of a representation. Early in this paper, four desiderata for morphophonological theories were introduced. This section considers whether using defaults is advantageous with respect to these desiderata.The first desideratum sought concise lexical representations for morphemes. 
Since default-based theories can also exploit underspecification of redundant feature values, they are at least as concise as non-default theories. If there are ever contrastive feature specifications, then they are more concise, allowing one side of the contrast to be left, as a default value to be instantiated. Note that the concept of conciseness which is being used here is feature.counting, not an informationtheoretic measure. In a direct application of information theory, contrasting a [+F] feature value with whitespace carries as much information as contrasting it with l-F] 3.Abstracting and recovering morphemes Defanlts also provide advantages in abstracting morpheme representations from which allomorphs can be aIt may be possible, nevertheless, to provide an information theoretic basis for the feature-counting notion by couching the feature specifications in a suitable descriptive language.recovered. As well as making representations more concise, using defaults allows more allomorphs to be brought together within a single phonological representation. As there are no feature changing rules in tile framework, all feature values in the abstract representation must survive to the surface in ca.oh allom,~rl~h. Conversely, the abstract representation can only contain feature specifications common to all of the allomorphs. So the upper bound on feature specifications for the abstract morpheme is the is the intersection of the featural specifications for all of the allomorphs of the morpheme.As an example, consider four allomorphs of the Turkish second person plural possessive suffix! -mxz, -iniz, -unuz and -ilniiz. While it is always possible to form abstract representations by intersecting feature values (the second desideratum), there is no guarantee that the allomorphs will be readily recoverable (third desideratum). If they are not recoverable, then there is no single featural generalisation which captures the phonological structure of the morphemes.One important question is whether defaults allow recoverable generalisations about a greater range of morphemes than non-default representations. The answer is yes. If the morphological alternations is onedimensional, then there is no difference between having defaults and not. Suppose 5 is a default predicate, and, equally, an exception feature. If all allomorphs are specified [+~] then the abstraction will share this feature, and so the default does not need to apply. Similarly if all allomorphs are specified [-6 ], so will the abstract forms be, and the default cannot apply. If the allomorphs vary in their specification for [±5], then the abstraction will not have include a specification for this feature. Consequently, the default will specify [+J] when the correct value is l-J], and so not fail to produce the correct result. In the non-default interpretation, the representation is never fully specified.On the other hand, if the morphological alternations form a two-dimensional paradigm, then it is possible that the paradigm might be decomposable into morphemes only with the use of defaults. Suppose, once again, that J is a default predicate and exception feature. The default feature value is [+5] . Suppose further, that there is a paradigm with the feature specification for [:t=5] shown in (15).[-~] [0~] [-~] [-~] [-~] [0~1 [-~] [+~1The margins show the 'morphemes' extracted by intersecting the feature values. 
The conjunction of the two [05] specifications is not fully specified for 5, and so its direct interpretation does not recover the corresponding component of the paradigm. If, however, the default [+6] is applied, the full specification of the paradigm is recovered.So it is possible to have paradigms where the morphological components cannot be assigned common phonological representations without the use of defaults 4.The final desideratum is the avoidance of a priori information in a model. UT makes use of an a priori set of features for lexical specification. As other generalisations in the formalism are only visible insofar as they affect the values of these features, this limits the possible constraints which can be identified. This is the reason why vowel harmonies such as that of Nez Perce are so problematic for phonologistsS: the sets of vowels used in the harmony do not have a neat definition in terms of traditional features.Greater claims about a priori features are made in OT. Prince & Smolensky (1993: 3) state that constraints are essentially universal and of very general formulation ... interlinguistic differences arise from the permutation of constraint-Tunking. In other words, all of the predicates which define features in OT are prior to the analysis of an individual language. In ET, very little is assumed a priori. Any constraint which captures interesting phonological generalisations about the phonology defines a feature which can be used to specify structure. Because ET does not use defaults, it need not be concerned with ordering constraints, only with finding them. Consequently, interlinguistic differences can only result from distinct sets of constraints.In this paper I have presented a rigorous framework for characterising theories that use defaults with phonological structure. The framework provides a straightforward characterisation of Underspecification Theory and Optimality Theory in terms of the action of defaults.Using this framework, I have shown that non-defanlt theories cannot be sure of capturing all of the generalisations which are available to default theories. For this reason, the non-default constraints learnt by programs suctl as ttmse described by Ellison (1992, forthconfing) , are not as powerful for morphophonological analysis as default-based theories. Furthermore, defaults lead to more concise, and consequently preferable, lexical representations.4If general predicates are permitted for specifying morphemes, rather than just featural specifications, the distinction between default and non-default systems disappears.If the entries in the l)aradigm are ~ij, define o~i to be Vj ~ij a.ml fl.j I.o be Ai((ij V "~,~i). Then, s(, long as |,ll~ t~i are disi,im:t (wiiich will l)e tim case if the (i.i are all distinct), then the i)~tradigm will be fully recoverable without defaults.5Anderson & Durand (1988) discuss some of this literature.The question, therefore, is how to enhance the learning algorithms to involve the use of defaults. The introduction of defaults means that constraints must be ordered; so learning must not only discover the right constraint, it must assign it a priority relative to other constraints. This makes the learning task consideral)le more complicated. However difficult a solution for this problem is to find, it will be necessary before m~u:hincgenerated analyses can be sure of competing successfully with man-made analyses. | 0 |
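The paradigm argument in (15) can be replayed concretely: intersect the feature specifications of the cells to obtain row and column "morphemes", then check whether the cells can be recovered with and without the default. The encoding below is an illustrative reconstruction of the 2x2 paradigm as we read it from the extracted table.

```python
# Sketch of the paradigm in (15): cell specifications for [±d], margins obtained by
# intersection, and recovery of the cells via the default [+d].
cells = {                          # (row, col) -> specification; None = unspecified
    (0, 0): "-", (0, 1): "-",
    (1, 0): "-", (1, 1): "+",
}

def intersect(values):
    """Common specification of a set of cells: a value if they all agree, else unspecified."""
    vals = set(values)
    return vals.pop() if len(vals) == 1 else None

rows = {r: intersect(cells[r, c] for c in (0, 1)) for r in (0, 1)}   # row "morphemes"
cols = {c: intersect(cells[r, c] for r in (0, 1)) for c in (0, 1)}   # column "morphemes"
print(rows, cols)                  # one row and one column margin come out unspecified ([0d])

def combine(a, b):
    """Conjunction of two possibly-unspecified specifications (no conflicts arise here)."""
    return b if a is None else a

def recover(r, c, use_default):
    spec = combine(rows[r], cols[c])
    return "+" if spec is None and use_default else spec

print(all(recover(r, c, True) == cells[r, c] for (r, c) in cells))    # True: default recovers the paradigm
print(all(recover(r, c, False) == cells[r, c] for (r, c) in cells))   # False: without it, one cell is lost
```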
Transition-based dependency parsing is perhaps the most successful parsing framework in use today (Nivre, 2008). This is because it can process sentences in linear time (Nivre, 2003); is highly accurate (Zhang and Nivre, 2011; Bohnet and Nivre, 2012); and has elegant mechanisms for parsing non-projective sentences (Nivre, 2009). As a result, there have been numerous studies into different transition systems, each with varying properties and complexities (Nivre, 2003; Attardi, 2006; Nivre, 2008; Nivre, 2009; Gómez-Rodríguez and Nivre, 2010; Choi and Palmer, 2011; Pitler and McDonald, 2015).

While connections between these transition systems have been noted, there has been little work on developing frameworks that generalize the phenomena parsed by these diverse systems. Such a framework would be beneficial for several reasons: it would provide a language in which known transition systems can be compared theoretically; it can give rise to new systems with favorable empirical properties; and an implementation of the generalization allows for comprehensive empirical studies.

In this work we provide such a generalized transition-based parsing framework. Our framework can be cast as transition-based parsing, since it contains both parser states and transitions between these states that construct dependency trees. As in traditional transition-based parsing, the state maintains two data structures: a set of unprocessed tokens (normally called the buffer) and a set of operative tokens (often called the stack).

Key to our generalization is the notion of active tokens, the set of tokens among which new arcs can be created and/or which can be removed from consideration. A parser instantiation is defined by a set of control parameters, which dictate: the types of transitions that are permitted and their properties; the capacity of the active token set; and the maximum arc distance. We show that a number of different transition systems can be described via this framework. Critically, the two most common systems are covered -- arc-eager and arc-standard (Nivre, 2008) -- but also Attardi's non-projective system (Attardi, 2006), Kuhlmann's hybrid system (Kuhlmann et al., 2011), the directed acyclic graph (DAG) parser of Sagae and Tsujii (2008), and likely others. More interestingly, the easy-first framework of Goldberg and Elhadad (2010) can be described as an arc-standard system with an unbounded active token capacity.

We present a number of experiments with an implementation of our generalized framework. One major advantage of our generalization (and its implementation) is that it allows for easy exploration of novel systems not previously studied. In Section 5 we discuss some possibilities and provide experiments for these in Section 6.

The active set of tokens ACTIVE(O) that can be operated on by the transitions is the set {O[min(|O|, K)], ..., O[1]}. K additionally determines the size of O at the start of parsing; e.g., if K = 2, then we populate O with the first two tokens. This is equivalent to making SHIFT deterministic while |O| < K. Let M(T) be a multiset of transitions such that if t ∈ M(T), then t ∈ T. Note that M(T) is a multiset, and thus can contain multiple transitions of the same type. For each t ∈ M(T), our generalization requires the following control parameters to be set (each with a default value):

1. Bottom-up: B ∈ {true, false}. Whether creating an arc also reduces it. Specifically, we will have two parameters, B_L and B_R, which specify whether LEFT-ARC and RIGHT-ARC actions are bottom-up. We write B = true to mean B_L = B_R = true. For example, B_L = true indicates that a left-arc ←i,j is immediately followed by a reduce of i.

2. Arc-Shift: S ∈ {true, false}. Whether creating an arc also results in a SHIFT. Specifically, we will have two parameters, S_L and S_R, which specify whether LEFT-ARC and RIGHT-ARC actions are joined with a SHIFT. We write S = true to mean S_L = S_R = true. For example, S_L = true indicates that a left-arc ←i,j is immediately followed by a SHIFT (+). | 0
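To make the control parameters concrete, here is a schematic parser configuration with a LEFT-ARC transition whose side effects are switched by the bottom-up (B_L) and arc-shift (S_L) flags. This is an illustrative sketch of the bookkeeping only, not the authors' implementation; scoring, preconditions, the arc-distance parameter and the remaining transitions are omitted.

```python
# Sketch: an operative set O (most recent token last), a buffer, and a parameterized LEFT-ARC.
class Config:
    def __init__(self, tokens, K=2):
        self.buffer = list(tokens)
        self.O = [self.buffer.pop(0) for _ in range(min(K, len(self.buffer)))]  # |O| starts at K
        self.K = K
        self.arcs = []                           # (head, dependent) pairs

    def active(self):
        return self.O[-self.K:]                  # the K most recent operative tokens

    def shift(self):
        self.O.append(self.buffer.pop(0))

    def reduce(self, tok):
        self.O.remove(tok)                       # tok is no longer available for new arcs

    def left_arc(self, i, j, bottom_up=True, arc_shift=False):
        """Create an arc j -> i for tokens i, j in the active set."""
        assert i in self.active() and j in self.active()
        self.arcs.append((j, i))
        if bottom_up:                            # B_L: creating the arc also reduces the dependent
            self.reduce(i)
        if arc_shift and self.buffer:            # S_L: creating the arc is joined with a SHIFT
            self.shift()

c = Config(["A", "hearing", "is", "scheduled"], K=2)
c.left_arc("A", "hearing", bottom_up=True, arc_shift=True)
print(c.arcs, c.O, c.buffer)
```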
Topic models, such as Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 2017) and Latent Dirichlet Allocation (LDA) (Blei et al., 2003), play significant roles in helping machines interpret text documents. Topic models treat documents as bags of words: given the word information, they formulate documents as mixtures of latent topics, where the topics are generated via multinomial distributions over words. Bayesian methods are used to extract topical structures from the document-word frequency representations of the text corpus. Without supervision, however, the topics generated by these models are often not interpretable (Chang et al., 2009; Mimno et al., 2011). In recent studies, incorporating knowledge of different forms as supervision has become a powerful strategy for discovering meaningful topics.

Figure 1: An overview of the proposed TMKGE framework. Entities are shared by both documents and knowledge graphs. Entity embeddings generated by TransE, a knowledge graph embedding package, are passed into TMKGE to generate hidden topics.

Most conventional approaches take prior domain knowledge into account to improve topic coherence (Hu et al., 2014; Jagarlamudi et al., 2012; Doshi-Velez et al., 2015). One commonly used form of domain knowledge is based on word correlations (Chen and Liu, 2014). For example, must-links and cannot-links among words are generated by domain experts to help topic modeling. Another useful form of knowledge for topic discovery is based on word semantics (Chemudugunta et al., 2008; Hu et al., 2014; Jagarlamudi et al., 2012; Doshi-Velez et al., 2015). In particular, word embeddings (Pennington et al., 2014; Goldberg and Levy, 2014), in which bags of words are transformed into vector representations so that contexts are embedded into the word vectors, have been used as semantic regularities to enhance topic models (Nguyen et al., 2015; Li et al., 2016; Das et al., 2015; Batmanghelich et al., 2016).

Knowledge graph (KG) embedding (Bordes et al., 2013) learns a low-dimensional continuous vector space for entities and relations that preserves the inherent structure of KGs. Yao et al. (2017) propose KGE-LDA, which incorporates embeddings of KGs into topic models to extract better topic representations for documents, and show promising performance. However, KGE-LDA forces words and entities to have identical latent representations, which is a rather restrictive assumption that prevents the topic model from recovering the correct underlying latent structures of the data, especially in scenarios where only partial KGs are available.

This paper develops topic modeling with knowledge graph embedding (TMKGE), a hierarchical Dirichlet process (HDP) based model that extracts more coherent topics by taking advantage of the KG structure. Unlike KGE-LDA, the proposed TMKGE allows for more flexible sharing of information between words and entities, by using a multinomial distribution to model the words and a multivariate Gaussian mixture to model the entities. With this approach, we introduce two proportional vectors, one for words and one for entities; in contrast, KGE-LDA uses only one, shared by both words and entities. Similar to HDP, TMKGE includes a collection of Dirichlet processes (DPs) at both the corpus and document levels. The atoms of the corpus-level DP form the base measure for the document-level DPs of words and entities. Therefore, the atoms of the corpus-level DP can represent word topics, entity mixture components, or both.

Figure 1 provides an overview of TMKGE, where two sources of input, bags of words and KG embeddings, extracted from the corpus and the KGs respectively, are passed into TMKGE. As a nonparametric model, TMKGE does not assume a fixed number of topics or entity mixture components as constraints. Instead, it learns the number of topics and entity mixture components automatically from the data. Furthermore, an efficient online variational inference algorithm is developed, based on Sethuraman's stick-breaking construction of the HDP (Sethuraman, 1994). We in fact construct stick-breaking inference in a minibatch fashion (Wang et al., 2011; Bleier, 2013), to derive a more efficient and scalable coordinate-ascent variational inference for TMKGE.

In summary, we propose TMKGE, a Bayesian nonparametric model that extracts more coherent topics by taking advantage of knowledge graph structures. We introduce two proportional vectors for more flexible sharing of information between words and entities. We derive an efficient and scalable parameter estimation algorithm via online variational inference. Finally, we empirically demonstrate the effectiveness of TMKGE in topic discovery and document classification. | 0
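As a small illustration of the stick-breaking construction underlying the corpus-level DP, the snippet below draws truncated corpus-level weights over shared atoms and then gives each document its own mixing proportions over those same atoms, once for words and once for entities. This is a schematic generative-side picture only, not the TMKGE variational inference; the truncation level, concentration parameters and the Dirichlet approximation at the document level are arbitrary choices for the sketch.

```python
# Sketch: truncated stick-breaking weights for a corpus-level DP, with per-document
# reweighting of the shared atoms (HDP-style sharing between word and entity views).
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(alpha, truncation):
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining                  # truncated, so the weights sum to slightly less than 1

corpus_weights = stick_breaking(alpha=1.0, truncation=20)

# Each document mixes the same shared atoms with its own proportions; in TMKGE there is
# one proportion vector for words and one for entities (both small positive offsets are
# only there to keep the Dirichlet parameters strictly positive).
doc_word_props   = rng.dirichlet(50.0 * corpus_weights + 1e-6)
doc_entity_props = rng.dirichlet(50.0 * corpus_weights + 1e-6)
print(corpus_weights[:5].round(3), doc_word_props.argmax(), doc_entity_props.argmax())
```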
Systems which map natural language query phrases onto database access query statements, (Trost et al., 1987) , (Androutsopoulos et al., 1995) , provide a natural communication environment. Yet this means that they must be able to handle vagueness, incompleteness or even ungrammaticality as these phenomena tend to be associated with language use under specific external constraints as e.g. in situations concerned with database access. In this paper, we are going to describe the natural language understanding component of the TAMIC-P 1 system for German, which interprets natural language queries addressing the databases of the Austrian social insurance institution for farmers (Sozialversicherungsanstalt der Bauern, SVB). The input queries are parsed and mapped onto a representation in quasi-logical form which serves as basis for the required database access. Simultaneously, the queries are searched for domain-specific cue words which are part of a lexical knowledge base (cf. e.g. (Christ, 1994) ) This knowledge base is also accessible from the user's query interface. Additionally, many legal terms which occur in the queries may be linked to the underlying legal regulations. These regulations are available in hypertext format and allow for browsing via the user interface. Domain-specific help files will eventually also be integrated in the data to be accessed via the user interface.An evaluation of the user requirements in this specific natural language task yielded the result that the prospective users of the system, social insurance clerks at local information days, feel most comfortable if they have the possibility to input their queries as noun phrases. For the natural language query interpretation task, this implies that complex and heterogeneous noun phrases have to be interpreted adequately.In the following sections, we are going to describe the general aim of the TAMIC-P system, the system scenario, the kind of input the system has to be able to deal with and the parsing process. We will then give an outline of how some complex queries are treated in parsing, and finally, we are going to conclude with observations (including their implications for further work) derived from the current system setting and its application in the process of parsing NP queries in the TAMIC-P domain.2 The TAMIC-P system for German in the Austrian scenarioThe TAMIC-P system is realized in collaboration with Italian and German 2 project partners.2DFKI GmbH, Saarbr/icken, Germany, has contributed to the German language module and the associated knowledge sources. The authors would therefore like to thank Stephan Busemarm for his work in this area, particularly the development of the grammar.It consists of two language components; one for Italian in the Italian scenario and one for German in the Austrian scenario. While the interface structure (developed by the Italian partner) and the configuration of the main modules are mostly identical in the two scenarios and while both applications aim at interpreting NP queries, the two natural language components represent two distinct approaches to the query interpretation task. 
In this paper, we will focus on the query interpretation module as well as the required knowledge sources for dealing with German NPs. The scenario- and language-specific part of the system consists mainly of the actual parsing component, a lexical knowledge base (LKB) representing the entities denoted in the queries and their relations to each other, the conceptual data model (CDM) specifying which entities can be found in the various databases, and the logical data model (LDM) which approximates the actual data as they occur in the domain.

fig1: Architecture of the interpretation module for German queries in the Austrian scenario (components shown: query, query interpretation, personal record mode, legal norms mode, lexical structure mode).

The parsing component and the knowledge sources are connected closely. In order to construct a quasi-logical-form expression for a natural language query encountered as system input, the parsing component has to consult the relations between the denoted entities as they are represented in the database. These relations are modelled in the CDM, which is a unified and simplified version of the logical data model represented in the database. Yet it is not sufficient to construct the QLF representation: as cue words have to be identified in the queries in order to present legal norms and domain-specific lexical relations, these contexts have to be built and embedded in the framework of legal regulations and concept structures. Consequently, there are three output modes (personal information, legal texts, legal lexicon) which are presented on the interface as a card-index display. A fourth output mode (domain-specific help files) will eventually be added to the system.

3 The TAMIC-P scenario

Queries which have to be interpreted in the TAMIC-P domain concern all areas of social security, i.e. pension, health and accident insurance. In order to provide the required information, several databases have to be consulted. As social insurance employees in advisory dialogues have to work under extreme cognitive pressure due to limited time and other situational constraints, it is the aim of the TAMIC-P project to simplify this advisory process by providing one interface for the various tasks which have to be performed:

• consult a citizen's insurance data in several databases
• consult the relevant laws and regulations
• consult a legal glossary for related terms and concepts

In TAMIC-P, these tasks are based on the interpretation of natural language queries and the interaction with the interface. For the user, this verbal and graphical interaction facilitates obtaining the requested results from the databases as well as the norms and the compilation of legal terms. At the same time, clients using the system have to rely on great robustness in the query interpretation task so that queries do not result in the distribution of false, inadequate or incomplete information. Queries may concern personal data as well as legal affairs. A fairly typical query would be, for instance, the query discussed below.

At the present state of the system, this query has three dimensions with regard to its interpretation: First, it concerns the personal insurance records stored for a specific person (this person is determined by the context) who may or may not have acquired the requested type of insurance months. Second, it refers to a special official status of insurance months which is defined in legal texts describing the benefits which can be derived from different kinds of insurance times.
Third, the query and its underlying concepts have to be compared to related queries and concepts: I.e. in a wordnet-type structure, Ersatzzeiten ('exemption times') is in this specific use synonymous to Ersat=monate ('exemption months') and belongs to the category of Versicherungszeiten ('insurance times'). Furthermore, there are several different types of specific 'exemption times because of child raising' (e.g. raising adopted children, grandchildren etc.) which have to be considered in an evaluation of the insurance records, particularly if a citizen applies for retirement pension.An evaluation of user queries has shown that the SVB clerks were reluctant to form full, grammatically elaborate sentences; instead, they preferred to rely on noun phrases denoting the requested concepts from the social insurance domain. Regarding the example, this does not come as a surprise as -apart from the additional cognitive effort which is required in using a complete sentence -using an NP seems to be a natural way of including the three dimensions of the specific personal information from the insurance records, the legal norms, and the lexical knowledge base in the noun phrase quoted above. In contrast, three complete sentences have to be formed to refer to the same dimensions if noun phrases are avoided. The corresponding English phrases for the German full examples are:• Which exemption times because of child raising are stored for Mrs/Mr x?• What are the legal regulations concerning exemption times because of child raising?• What are the relevant lezical properties associated with exemption times because of child raising?Note that the second and the third example use some kind of metalanguage to link 'exemption times because of child raising' to the required dimension. In contrast to this, the noun phrase 'exemption times because of child raising' needs no metalanguage und points elliptically to all three dimensions. Therefore, if only noun phrases are used, their inherent vagueness and incompleteness at least provide means to consult all the relavant information sources simultaneously.For the NP analysis, this implies that much effort has to be invested in modelling the entities and relations denoted by the linguisti c expressions, and certainly also the mapping between the linguistic and the conceptual levels.Tighly packed linguistic structures refer to complex conceptual structures. The dependencies in the conceptual model are mirrored in the linguistic expressions. As we are dealing with noun phrase queries, it is not possible to regard verbs as assigning the key structural dependencies. Yet it can be said that certain conceptually pivotal nouns 'subcategorize' (to use this term loosely for illustration purposes) for specific linguistic objects which are determined by the underlying conceptual data model as well as certain conventions of use. These 'subcategorization relations' are often confirmed by prepositions with weak semantics:Ersatzzeiten wegen Kindererziehung 'Exemption times because of child raising.'The function of the preposition is to indicate the relation. In German, these types of noun phrase are often turned into a compound with the meaning remaining unchanged:Kinde rerziehu ngse rsa tzzeite n 'child raising exemption times'These remarks already describe the NP typology encountered in queries: complex NP clusters, NP-PP clusters (with faded prepositional semantics) and complex compounds. 
Of course, any combination of the three types may also occur.

5 Parsing NP queries

5.1 Compositionality and non-compositionality

Generally, the parsing process in the TAMIC-P query interpretation component for the Austrian scenario relies strongly on the hypothesis that the semantics of a phrase (represented for example in quasi-logical form) can be derived by composition of the QLFs of the parts the phrase is composed of. This implies that linguistic paraphrases which denote the same object or set of objects have to eventually end up in identical representations. As we are dealing with a limited domain, which usually restricts the number of options available for interpretation, this approach is feasible. Yet so far, we have no possibility of dealing with queries beyond the field of social insurance. The domain also ensures that the conceptual data model, representing entities and relations in the actual database, often contains simple objects, attributes or attribute values which are referred to by complex query elements on the natural-language level. In order to tackle this problem, a filter mechanism is used which

• treats all natural language utterances in a compositional manner, possibly involving QLF predicates (originating from the lexicon) that do not denote a CDM object, and
• applies a set of substitutions to the QLF resulting from the parse that transform the complex description into the simplistic one contained in the CDM.

Since the number of such 'non-compositional' objects in the CDM is limited and their 'compositional' meaning in terms of a QLF is unique, not very many substitution definitions are required, and it is not difficult to come up with them. At the same time, we are aware of the fact that a larger domain might require a broader analysis approach (Rayner, 1993). | 0
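A schematic version of the filter mechanism described above: the parser first builds a compositional QLF, and a small table of substitutions then rewrites complex sub-descriptions into the simple objects of the conceptual data model. The QLF encoding as nested tuples and the predicate names are invented for illustration and do not reflect the system's actual representations.

```python
# Sketch: post-parse substitutions mapping a compositional QLF onto 'non-compositional' CDM objects.
SUBSTITUTIONS = {
    # 'exemption times because of child raising' -> a single CDM attribute (names are illustrative)
    ("because_of", ("exemption_time",), ("child_raising",)): ("cdm:child_raising_exemption_months",),
}

def apply_substitutions(qlf):
    """Recursively replace complex QLF sub-terms by their CDM counterparts."""
    if isinstance(qlf, tuple):
        qlf = tuple(apply_substitutions(part) for part in qlf)
        return SUBSTITUTIONS.get(qlf, qlf)
    return qlf

compositional = ("query", ("because_of", ("exemption_time",), ("child_raising",)))
print(apply_substitutions(compositional))
# -> ('query', ('cdm:child_raising_exemption_months',))
```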
More than 20 years after it was first published, the three-stage architecture for Natural Language Generation (NLG) systems described by Reiter (1994) is still frequently cited. According to his architecture, most NLG systems of the time consisted of a pipeline with three steps: Content Determination, Sentence Planning, and Surface Realisation (or Surface Generation). Today, stochastic approaches to NLG are very popular, which often use a black box approach instead of a modular pipeline (cf. e.g. Dušek et al. (2018)). Nevertheless, rule-based systems still play a crucial role, especially in application contexts, because they provide advantages like higher controllability. SimpleNLG, developed by Gatt and Reiter (2009), is arguably the most popular open source realisation engine. It is implemented in Java and its current version (4.4.8) is available under the Mozilla Public License (MPL). 1 Since it was published in 2009, SimpleNLG was adapted to seven other languages; these are (in chronological order): German (Bollmann, 2011), French (Vaudry and Lapalme, 2013), Italian (Mazzei et al., 2016), Spanish (Ramos-Soto et al., 2017), Dutch (de Jong and Theune, 2018), Mandarin (Chen et al., 2018), and Galician (Cascallar-Fuentes et al., 2018). Unfortunately, the German version of SimpleNLG 2 is not maintained anymore and is based on the outdated third version of SimpleNLG, which used a more restrictive license that prohibited commercial use (Bollmann, 2019). Moreover, the existing German version also comes with a very limited lexicon, consisting of just around 100 lemmata. It also does not automatically recognise and handle separable verbs like "abfahren" ("to depart") or "einkaufen" ("to purchase"). The only openly available alternative is a German grammar for OpenCCG (Vancoppenolle et al., 2011), which is even more limited with regard to both grammatical coverage and its lexicon. Therefore, we decided to develop SimpleNLG-DE, a new German version of SimpleNLG, implemented from scratch, based on SimpleNLG 4.4.8 and the MPL. SimpleNLG-DE comes with a standard lexicon containing more than 100,000 lemmata and is available from https://github.com/sebischair/SimpleNLG-DE. | 0 |
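SimpleNLG-DE itself is a Java library, so the following Python fragment is not its API; it is only an illustrative sketch of the kind of prefix rule needed to recognise German separable verbs such as "abfahren" or "einkaufen". The prefix list is partial and the length heuristic is an assumption; a real system would consult the lexicon:

```python
# Heuristic detection of German separable verbs by prefix; real separability
# depends on the lexicon (some prefixes are ambiguous), so this is a sketch.
SEPARABLE_PREFIXES = ("ab", "an", "auf", "aus", "ein", "mit", "vor", "zu")

def split_separable(infinitive: str):
    """Return (prefix, stem) if the verb looks separable, else (None, verb)."""
    for prefix in SEPARABLE_PREFIXES:
        stem = infinitive[len(prefix):]
        if infinitive.startswith(prefix) and len(stem) > 3:
            return prefix, stem
    return None, infinitive

print(split_separable("abfahren"))   # ('ab', 'fahren')
print(split_separable("einkaufen"))  # ('ein', 'kaufen')
print(split_separable("verkaufen"))  # (None, 'verkaufen') -- 'ver' is inseparable
```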
Contextual, or 'data-to-text' natural language generation is one of the core tasks in natural language processing and has a considerable impact on various fields (Gatt and Krahmer, 2017). Within the field of recommender systems, a promising application is to estimate (or generate) personalized reviews that a user would write about a product, i.e., to discover their nuanced opinions about each of its individual aspects. A successful model could work (for instance) as (a) a highly-nuanced recommender system that tells users their likely reaction to a product in the form of text fragments; (b) a writing tool that helps users 'brainstorm' the review-writing process; or (c) a querying system that facilitates personalized natural language queries (i.e., to find items about which a user would be most likely to write a particular phrase). Some recent works have explored the review generation task and shown success in generating cohesive reviews (Dong et al., 2017; Ni et al., 2017; Zang and Wan, 2017). Most of these works treat the user and item identity as input; we seek a system with more nuance and more precision by allowing users to 'guide' the model via short phrases, or auxiliary data such as item specifications. For example, a review writing assistant might allow users to write short phrases and expand these key points into a plausible review. Review text has been widely studied in traditional tasks such as aspect extraction (Mukherjee and Liu, 2012; He et al., 2017), extraction of sentiment lexicons (Zhang et al., 2014), and aspect-aware sentiment analysis (Wang et al., 2016; McAuley et al., 2012). These works are related to review generation since they can provide prior knowledge to supervise the generative process. We are interested in exploring how such knowledge (e.g. extracted aspects) can be used in the review generation task. In this paper, we focus on designing a review generation model that is able to leverage both user and item information as well as auxiliary, textual input and aspect-aware knowledge. Specifically, we study the task of expanding short phrases into complete, coherent reviews that accurately reflect the opinions and knowledge learned from those phrases. These short phrases could include snippets provided by the user, or manifest aspects about the items themselves (e.g. brand words, technical specifications, etc.). We propose an encoder-decoder framework that takes into consideration three encoders (a sequence encoder, an attribute encoder, and an aspect encoder), and one decoder. The sequence encoder uses a gated recurrent unit (GRU) network to encode text information; the attribute encoder learns a latent representation of user and item identity; finally, the aspect encoder finds an aspect-aware representation of users and items, which reflects user-aspect preferences and item-aspect relationships. The aspect-aware representation is helpful to discover what each user is likely to discuss about each item. Finally, the output of these encoders is passed to the sequence decoder with an attention fusion layer. The decoder attends on the encoded information and biases the model to generate words that are consistent with the input phrases and words belonging to the most relevant aspects. | 0 |
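A hedged sketch of the three-encoder layout described above (sequence, attribute and aspect encoders feeding an attention fusion step), not the authors' implementation; all dimensions, the dot-product attention and the way the aspect scores are produced are illustrative choices:

```python
import torch
import torch.nn as nn

class ReviewEncoders(nn.Module):
    def __init__(self, vocab=5000, n_users=100, n_items=100, n_aspects=5, d=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, d)
        self.seq_enc = nn.GRU(d, d, batch_first=True)    # sequence encoder
        self.user_emb = nn.Embedding(n_users, d)          # attribute encoder
        self.item_emb = nn.Embedding(n_items, d)
        self.aspect = nn.Linear(2 * d, n_aspects)         # toy aspect encoder

    def forward(self, phrase_ids, user_id, item_id):
        states, _ = self.seq_enc(self.word_emb(phrase_ids))        # (B, T, d)
        u, v = self.user_emb(user_id), self.item_emb(item_id)      # (B, d) each
        aspect_scores = torch.softmax(self.aspect(torch.cat([u, v], -1)), dim=-1)
        # attention fusion: attend over phrase states, user vector as query
        weights = torch.softmax((states * u.unsqueeze(1)).sum(-1), dim=-1)  # (B, T)
        context = (weights.unsqueeze(-1) * states).sum(1)           # (B, d)
        return context, aspect_scores   # both would be fed to the decoder

enc = ReviewEncoders()
ctx, asp = enc(torch.randint(0, 5000, (2, 7)), torch.tensor([1, 2]), torch.tensor([3, 4]))
print(ctx.shape, asp.shape)  # torch.Size([2, 64]) torch.Size([2, 5])
```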
Semantic role labeling (SRL) is the task of labeling the predicate-argument structures of sentences with semantic frames and their roles (Baker et al., 1998; Palmer et al., 2005) . It has been found useful for a wide variety of NLP tasks such as question-answering (Shen and Lapata, 2007) , information extraction (Fader et al., 2011) and machine translation (Lo et al., 2013) . A major bottleneck impeding the wide adoption of SRL is the need for large amounts of labeled training data to capture broad-coverage semantics. Such data requires trained experts and is highly costly to produce (Hovy et al., 2006) . Crowdsourcing SRL Crowdsourcing has shown its effectiveness to generate labeled data for a range of NLP tasks (Snow et al., 2008; Hong and Baker, 2011; Franklin et al., 2011) . A core advantage of crowdsourcing is that it allows the annotation workload to be scaled out among large numbers of inexpensive crowd workers. Not surprisingly, a number of recent SRL works have also attempted to leverage crowdsourcing to generate labeled training data for SRL and investigated a variety of ways of formulating crowdsourcing tasks (Fossati et al., 2013; Pavlick et al., 2015; . All have found that crowd feedback generally suffers from low interannotator agreement scores and often produces incorrect labels. These results seem to indicate that, regardless of the design of the task, SRL is simply too difficult to be effectively crowdsourced. Proposed Approach We observe that there are significant differences in difficulties among SRL annotation tasks, depending on factors such as the complexity of a specific sentence or the difficulty of a specific semantic role. We therefore postulate that a subset of annotation tasks is in fact suitable for crowd workers, while others require expert involvement. We also postulate that it is possible to use a classifier to predict whether a specific task is easy enough for crowd workers.Based on these intuitions, we propose CROWD-IN-THE-LOOP, a hybrid annotation approach that involves both crowd workers and experts: All annotation tasks are passed through a decision function (referred to as TASKROUTER) that classifies them as either crowd-appropriate or expertrequired, and sent to crowd or expert annotators accordingly. Refer to Figure 1 for an illustration of this workflow. We conduct an experimental evaluation that shows (1) that we are able to design a classifier that can distinguish between crowd-appropriate and expert-required tasks at very high accuracy (96%), and (2) that our proposed workflow allows us to pass over two-thirds of the annotation workload to crowd workers, thereby significantly reducing the need for costly expert involvement. Contributions In detail, our contributions are:• We propose CROWD-IN-THE-LOOP, a novel approach for creating annotated SRL data with both crowd workers and experts. It reduces overall labeling costs by leveraging the crowd whenever possible, and maintains annotation quality by involving experts whenever necessary.• We propose TASKROUTER, an annotation task decision function (or classifier), that classifies each annotation task into one of two categories: expert-required or crowdappropriate. We carefully define the classification task, discuss features and evaluate different classification models.• We conduct a detailed experimental evaluation of the proposed workflow against several baselines including standard crowdsourcing and other hybrid annotation approaches. 
We analyze the strengths and weaknesses of each approach and illustrate how expert involvement is required to address errors made by crowd workers.Outline This paper is organized as follows: We first conduct a baseline study of crowdsourcing SRL annotation, and analyze the difficulties of relying solely on crowd workers (Section 2). Based on this analysis, we define the classification problem for CROWD-IN-THE-LOOP, discuss the design of our classifier, and evaluate its accuracy (Section 3). We then employ this classifier in the pro-posed CROWD-IN-THE-LOOP approach and comparatively evaluate it against a number of crowdsourcing and hybrid workflows (Section 4). We discuss related work (Section 5) and conclude the study in Section 6. | 0 |
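A minimal sketch of the routing idea behind TASKROUTER, with made-up task features: a binary classifier estimates whether an annotation task is crowd-appropriate, and the workflow dispatches it accordingly. The feature set and threshold are assumptions, not those used in the paper:

```python
from sklearn.linear_model import LogisticRegression

# toy features per task: [sentence length, role rarity, predicate ambiguity]
X_train = [[12, 0.1, 0.2], [35, 0.8, 0.9], [9, 0.2, 0.1], [40, 0.9, 0.7]]
y_train = [1, 0, 1, 0]  # 1 = crowd-appropriate, 0 = expert-required

task_router = LogisticRegression().fit(X_train, y_train)

def route(task_features, threshold=0.5):
    """Send the task to the crowd if the classifier is confident enough."""
    p_crowd = task_router.predict_proba([task_features])[0][1]
    return "crowd" if p_crowd >= threshold else "expert"

print(route([10, 0.15, 0.1]))  # likely "crowd"
print(route([38, 0.85, 0.8]))  # likely "expert"
```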
One of the main challenges faced by speech scientists is the unavailability of three important resources: first and most importantly, a speech corpus (speech database), a pronunciation dictionary and a transcription file. Compared to English, very few efforts have been made in Indian languages to make these resources publicly available. Creating these resources is time consuming, tedious and requires considerable manpower. Building a well-defined pronunciation dictionary requires thorough knowledge of the phonetics, phonological rules, and syntactic and semantic structure of the language. It is necessary to have databases which comprise appropriate sentences spoken by typical users in a realistic acoustic environment. Speech databases can be divided into two groups: (i) a database of speech normally spoken in a specific task domain, in which case a small amount of speech is sufficient to achieve acceptable recognition accuracy; and (ii) a general purpose speech database that is not tuned to a particular task domain but consists of general text and hence can be used for recognition of any sentence in that language. The problem with most speech recognition systems is insufficient training data containing speech variations (spontaneous speech) caused by speaker variance (covering a large number of speakers). To overcome these problems, a large vocabulary speech database is required to build a robust recognizer. The purpose of selecting phonetically rich sentences is to provide good coverage of pairs of phones in the sentences. The current work also aims at the development of databases for Malayalam speech recognition that will facilitate acoustic-phonetic studies and the training and testing of automatic speech recognition systems. It is anticipated that the availability of this speech corpus would also stimulate basic research in Malayalam acoustic-phonetics and phonology. In this paper three sets of databases have been created (task-specific databases, a general purpose database and a specially designed database for unique phoneme analysis). The task-specific database includes three domain-based databases, i.e. an isolated digit speech database, a connected digit speech database and a continuous speech database. The general purpose database includes a set of phoneme-class-wise speech databases. The database designed for unique phoneme analysis includes 32 specially designed minimal pairs of words as well as a set of words which include unique phonemes in any word position. Section 2 discusses the phonetic chart of the Malayalam language and section 3 details the text corpus that has been prepared. Section 4 elaborates the method of speech data collection. The phoneme list prepared for the work is explained in section 5, followed by the creation of the pronunciation dictionary in section 6. Section 7 explains the format and model of the transcription files being prepared. Section 8 gives the results of various speech recognition tasks. Malayalam has 52 consonant phonemes, encompassing 7 places of articulation and 6 manners of articulation, as shown in Table 1 below. In terms of manner of articulation, plosives are the most complicated, for they demonstrate a six-way distinction in labials, dentals, alveolars, retroflexes, palatals, velars and glottals [1]. A labial plosive, for example, is either voiceless or voiced. 
Within voiceless labial plosives, a further distinction is made between aspirated and unaspirated ones, whereas for voiced labial plosives the distinction is between modal-voiced and breathy-voiced ones. In terms of place of articulation, retroflexes are the most complex because they involve all manners of articulation except for semivowels [2]. The phonetic chart presented by Kumari (1972) [3] for the Malayalam language is given in Table 1 and has been used as the reference for this paper. For all speech sounds, the basic source of power is the respiratory system pushing air out of the lungs. Sounds produced when the vocal cords are vibrating are said to be voiced, whereas sounds produced when the vocal cords are apart are said to be voiceless [4]. The shape and size of the vocal tract is a very important factor in the production of speech. The parts of the vocal tract, such as the tongue and the lips, that can be used to form sounds are called articulators (Fig. 1). The movements of the tongue and lips interacting with the roof of the mouth (palate) and the pharynx are part of the articulatory process [5]. | 0 |
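An illustrative sketch only: a toy grapheme-to-phoneme table and a helper that writes pronunciation-dictionary entries in the common "WORD ph1 ph2 ..." plain-text format used by HTK/Sphinx-style recognisers. The symbol table below is a placeholder, not the Malayalam phone set defined in the paper:

```python
G2P = {"a": "a", "m": "m", "l": "l", "y": "y"}   # hypothetical symbol table

def naive_g2p(word):
    """Very naive letter-to-phone mapping; real G2P needs phonological rules."""
    return [G2P[ch] for ch in word.lower() if ch in G2P]

def dict_entry(word, phones):
    return f"{word.upper()} {' '.join(phones)}"

for w in ["malayalam"]:
    print(dict_entry(w, naive_g2p(w)))
# MALAYALAM m a l a y a l a m
```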
With the advance of deep learning in text analytics, many benchmarks for text analytics tasks have been significantly improved in the last four years. For this reason, Zurich University of Applied Sciences (ZHAW) and SpinningBytes AG are collaborating in a joint research project to develop stateof-the-art solutions for text analytics tasks in several European languages. The goal is to adapt and optimize algorithms for tasks like sentiment analysis, named entity recognition (NER), topic extraction etc. into industry-ready software libraries.One very challenging task is automatic sentiment analysis. The goal of sentiment analysis is to classify a text into the classes positive, negative, mixed, or neutral. Interest in automatic sentiment analysis has recently increased in both academia and industry due to the huge number of documents which are publicly available on social media. In fact, there exist various initiatives in the scientific community (such as shared tasks at Se-mEval (Nakov et al., 2016) or TREC (Ounis et al., 2008) ), competitions at Kaggle 1 , special tracks at major conferences like EMNLP or LREC, and several companies have built commercial sentiment analysis tools (Cieliebak et al., 2013) .Deep learning for sentiment analysis. Deep neural networks have become very successful for sentiment analysis. In fact, the winner and many top-ranked systems in SemEval-2016 were using deep neural networks (SemEval is an international competition that runs every year several tasks for semantic evaluation, including sentiment analysis) (Nakov et al., 2016 ). The winning system uses a multi-layer convolutional neural network that is trained in three phases. For English, this system achieves an F1-score of 62.7% on the test data of SemEval-2016 (Deriu et al., 2016) , and top scores on test data from previous years. For this reason, we decided to adapt the system for sentiment analysis in German. Details are described in Section 4.A new corpus for German sentiment. In order to train the CNN, millions of unlabeled and weakly-labeled German tweets are used for creating the word embeddings. In addition, a sufficient amount of manually labeled tweets is required to train and optimize the system. For languages such as English, Chinese or Arabic, there exist plenty of labeled training data for sentiment analysis, while for other European languages, the resources are often very limited (cf. "Related Work"). For German, in particular, we are only aware of three sentiment corpora of significant size: the DAI tweet data set, which contains 1800 German tweets with tweet-level sentiments (Narr et al., 2012) ; the MGS corpus, which contains 109,130 German tweets (Mozetič et al., 2016) ; and the PotTS corpus, which contains 7992 German tweets that were annotated on phrase level (Sidarenka, 2016) . Unfortunately, the first corpus is too small for training a sentiment system, the the second corpus has a very low inter-annotator agreement (α = 0.34), indicating low-quality annotations, and the third corpus is not on sentence level.For this reason, we decided to construct a large sentiment corpus with German tweets, called SB10k. This corpus should allow to train highquality machine learning classifiers. It contains 9783 German tweets, each labeled by three annotators. Details of corpus construction and properties are described in Section 3.Benchmark for German Sentiment. We evaluate the performance of the CNN on the three German sentiment corpora CAI, MGS, and SB10k in Section 5. 
In addition, we compare the results to a baseline system, a feature-based Support Vector Machine (SVM). To our knowledge, this is the first large-scale benchmark for sentiment analysis on German tweets.Main Contributions. Our main contributions are:• Benchmarks for sentiment analysis in German on three corpora.• A new corpus SB10k for German sentiment with approx. 10000 tweets, manually labeled by three annotators.• Publicly available word embeddings trained on 300M million German tweets (using word2vec), and modified word embeddings after distant-supervised learning with 40M million weakly-labeled sentiment tweets.The new corpus, word embeddings for German (plain and fully-trained) and source code to re-run the benchmarks are available at www.spinningbytes.com/resources. | 0 |
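A hedged sketch of a single-layer sentence CNN of the kind referred to above, not the multi-phase system of Deriu et al.; vocabulary size, embedding dimension and filter settings are illustrative, and in practice the embedding layer would be initialised from the pretrained word2vec vectors:

```python
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    def __init__(self, vocab=20000, emb_dim=50, n_filters=100, width=3, classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb_dim)    # init from word2vec in practice
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=width)
        self.out = nn.Linear(n_filters, classes)   # positive / negative / neutral

    def forward(self, token_ids):                   # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)     # (batch, emb_dim, seq_len)
        h = torch.relu(self.conv(x))                # (batch, n_filters, seq_len-2)
        pooled = h.max(dim=2).values                # max-over-time pooling
        return self.out(pooled)                     # logits per class

model = SentenceCNN()
logits = model(torch.randint(0, 20000, (8, 30)))
print(logits.shape)  # torch.Size([8, 3])
```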
Understanding a story requires understanding sequences of events. It is thus vital to model semantic sequences in text. This modeling process necessitates deep semantic knowledge about what can happen next. Since events involve actions, participants and emotions, semantic knowledge about these aspects must be captured and modeled.Consider the examples in Figure 1 . In Ex.1, we observe a sequence of actions (commit, arrest, charge, try) , each corresponding to a predicate Ex.1 (Actions -Frames) Steven Avery committed murder. He was arrested, charged and tried. Opt.1 Steven Avery was convicted of murder. Opt.2 Steven went to the movies with friends. Alter. Steven was held in jail during his trial.Ex.2 (Participants -Entities) It was my first time ever playing football and I was so nervous. During the game, I got tackled and it did not hurt at all! Opt.1 I then felt more confident playing football. Opt.2 I realized playing baseball was a lot of fun. Alter. However, I still love baseball more.Ex.3 (Emotions -Sentiments) Joe wanted to become a professional plumber. So, he applied to a trade school. Fortunately, he was accepted. Opt.1 It made Joe very happy. Opt.2 It made Joe very sad. Alter. However, Joe decided not to enroll because he did not have enough money to pay tuition. Figure 1 : Examples of short stories requiring different aspects of semantic knowledge. For all stories, Opt.1 is the correct follow-up, while Opt.2 is the contrastive wrong follow-up demonstrating the importance of each aspect. Alter. showcases an alternative correct follow-up, which requires considering different aspects of semantics jointly. frame. Clearly, "convict" is more likely than "go" to follow such sequence. This semantic knowledge can be learned through modeling frame sequences observed in a large corpus. This phenomena has already been studied in script learning works (Chatman, 1980; Chambers and Jurafsky, 2008b; Ferraro and Van Durme, 2016; Pichotta and Mooney, 2016a; Peng and Roth, 2016) . However, modeling actions is not sufficient; participants in actions and their emotions are also important. In Ex. 2, Opt.2 is not a plausible answer because the story is about "football", and it does not make sense to suddenly change the key en- Table 1 : Comparison of generative ability for different models. For each model, we provide Ex.1 as context and compare the generated ending. 4-gram and RNNLM models are trained on NYT news data while Seq2Seq model is trained on the story data (details see Sec. 5). These are models operated on the word level. We compare them with FC-SemLM (Peng and Roth, 2016) , which works on frame abstractions, i.e. "predicate.sense". For the proposed FES-LM, we further assign the arguments (subject and object) of a predicate with NER types ("PER, LOC, ORG, MISC") or "ARG" if otherwise. Each argument is also associated with a "[new/old]" label indicating if it is first mentioned in the sequence (decided by entity co-reference). Additionally, the sentiment of a frame is represented as positive (POS), neural (NEU) or negative (NEG). FES-LM can generate better endings in terms of soundness and specificity. The FES-LM ending can be understood as "[Something] convict a person, who has been mentioned before (with an overall negative sentiment)", which can be instantiated as "Steven Avery was convicted." given current context. tity to "baseball". 
In Ex.3, one needs understand that "being accepted" typically indicates a positive sentiment and that it applies to "Joe".As importantly, we believe that modeling these semantic aspects should be done jointly; otherwise, it may not convey the complete intended meaning. Consider the alternative follow-ups in Figure 1 : in Ex.1, the entity "jail" gives strong indication that it follows the storyline that mentions "murder"; in Ex.2, even though "football" is not explicitly mentioned, there is a comparison between "baseball" and "football" that makes this continuation coherent; in Ex.3, "decided not to enroll" is a reasonable action after "being accepted", although the general sentiment of the sentence is negative. These examples show that in order to model semantics in a more complete way, we need to consider interactions between frames, entities and sentiments.In this paper, we propose a joint semantic language model, FES-LM, for semantic sequences, which captures Frames, Entities and Sentiment information. Just as "standard" language models built on top of words, we construct FES-LM by building language models on top of joint semantic representations. This joint semantic representation is a mixture of representations corre-sponding to different semantic aspects. For each aspect, we capture semantics via abstracting over and disambiguating text surface forms, i.e. semantic frames for predicates, entity types for semantic arguments, and sentiment labels for the overall context. These abstractions provide the basic vocabulary for FES-LM and are essential for capturing the underlying semantics of a story. In Table 1, we provide Ex.1 as context input (although FC-SemLM and FES-LM automatically generate a more abstract representation of this input) and examine the ability of different models to generate an ending. 4-gram, RNNLM and Seq2Seq models operate on the word level, and the generated endings are not satisfactory. FC-SemLM (Peng and Roth, 2016) works on basic frame abstractions and the proposed FES-LM model adds abstracted entity and sentiment information into frames. The results show that FES-LM produces the best ending among all compared models in terms of semantic soundness and specificity.We build the joint language model from plain text corpus with automatic annotation tools, requiring no human effort. In the empirical study, FES-LM is first built on news documents. We provide perplexity analysis of different variants of FES-LM as well as for the narrative cloze test, where we test the system's ability to recover a randomly dropped frame. We further show that FES-LM improves the performance of sense disambiguation for shallow discourse parsing. We then re-train the model on short commonsense stories (with the model trained on news as initialization). We perform story cloze test (Mostafazadeh et al., 2017) , i.e. given a four-sentence story, choose the fifth sentence from two provided options. Our joint model achieves the best known results in the unsupervised setting. In all cases, our ablation study demonstrates that each aspect of FES-LM contributes to the model.The main contributions of our work are: 1) the design of a joint neural language model for semantic sequences built from frames, entities and sentiments; 2) showing that FES-LM trained on news is of high quality and can help to improve shallow discourse parsing; 3) achieving the state-of-the-art result on story cloze test in an unsupervised setting with the FES-LM tuned on stories. | 0 |
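A hedged sketch of how the FES-LM vocabulary described above might be assembled: each event is abstracted to a predicate sense with NER-typed arguments marked as new/old plus a sentiment tag, and simple bigram counts are collected over the resulting symbols. The input annotations are assumed to come from SRL, NER, coreference and sentiment tools; the symbol format is illustrative:

```python
from collections import Counter

def abstract_frame(pred_sense, subj, obj, sentiment, seen_entities):
    def arg(a):
        if a is None:
            return "ARG[none]"
        ner, ent_id = a
        status = "old" if ent_id in seen_entities else "new"
        seen_entities.add(ent_id)
        return f"{ner}[{status}]"
    return f"{pred_sense}({arg(subj)},{arg(obj)})#{sentiment}"

events = [("commit.01", ("PER", "avery"), ("MISC", "murder"), "NEG"),
          ("arrest.01", None, ("PER", "avery"), "NEG"),
          ("convict.01", None, ("PER", "avery"), "NEG")]

seen, symbols = set(), []
for pred, subj, obj, sent in events:
    symbols.append(abstract_frame(pred, subj, obj, sent, seen))

bigrams = Counter(zip(symbols, symbols[1:]))  # toy FES-LM statistics
print(symbols)
print(bigrams.most_common(1))
```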
In Question Answering the correct answer can be formulated with different but related words than the question. Connecting the words in the question with the words in the candidate answer is not enough to recognize the correct answer. For example the following question from TREC 2004 (Voorhees, 2004) : Q: (boxer Floyd Patterson) Who did he beat to win the title? has the following wrong answer: WA: He saw Ingemar Johanson knock down Floyd Patterson seven times there in winning the heavyweight title. Although the above sentence contains the words Floyd, Patterson, win, title, and the verb beat can be connected to the verb knock down using lexical chains from WordNet, this sentence does not answer the question because the verb arguments are in the wrong position. The proposed answer describes Floyd Patterson as being the object/patient of the beating event while in the question he is the subject/agent of the similar event. Therefore the selection of the correct answer from a list of candidate answers requires the check of additional constraints including the match of verb arguments.Previous approaches to answer ranking, used syntactic partial matching, syntactic and semantic relations and logic forms for selecting the correct answer from a set of candidate answers. Tanev et al. (Tanev et al., 2004) used an algorithm for partial matching of syntactic structures. For lexical variations they used a dependency based thesaurus of similar words (Lin, 1998) . Hang et al. (Cui et al., 2004) used an algorithm to compute the similarity between dependency relation paths from a parse tree to rank the candidate answers. In TREC 2005 , Ahn et al. (Ahn et al., 2005 used Discourse Representation Structures (DRS) resembling logic forms and semantic relations to represent questions and answers and then computed a score "indicating how well DRSs match each other". Moldovan and Rus (Moldovan and Rus, 2001) transformed the question and the candidate answers into logic forms and used a logic prover to determine if the candidate answer logic form (ALF) entails the question logic form(QLF). Continuing this work Moldovan et al. (Moldovan et al., 2003 ) built a logic prover for Question Answering. The logic prover uses a relaxation module that is used iteratively if the proof fails at the price of decreasing the score of the proof. This logic prover was improved with temporal context detection (Moldovan et al., 2005) .All these approaches superficially addressed verb lexical variations. Similar meanings can be expressed using different verbs that use the same arguments in different positions. For example the sentence:John bought a cowboy hat for $50 can be reformulated as:John paid $50 for a cowboy hat. The verb buy entails the verb pay however the arguments a cowboy hat and $50 have different position around the verb.This paper describes the approach for propagating the arguments from one verb to another using lexical chains derived using WordNet (Miller, 1995) . The algorithm uses verb argument structures created from VerbNet syntactic patterns (Kipper et al., 2000b) .Section 2 presents VerbNet syntactic patterns and the machine learning approach used to increase the coverage of verb senses. Section 3 describes the algorithms for propagating verb arguments. Section 4 presents the results and the final section 5 draws the conclusions. | 0 |
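A minimal sketch of the argument-propagation idea: each verb is paired with a simplified, hypothetical VerbNet-style role pattern, and arguments are carried from one verb to an entailed verb by matching thematic roles rather than surface positions:

```python
# Simplified role patterns; real VerbNet frames are richer than this.
PATTERNS = {
    "buy": ["Agent", "Theme", "Asset"],   # John bought a hat for $50
    "pay": ["Agent", "Asset", "Theme"],   # John paid $50 for a hat
}

def propagate(source_verb, target_verb, source_args):
    """Map arguments by thematic role from source_verb to target_verb."""
    roles = dict(zip(PATTERNS[source_verb], source_args))
    return [roles.get(role) for role in PATTERNS[target_verb]]

args_buy = ["John", "a cowboy hat", "$50"]
print(propagate("buy", "pay", args_buy))
# ['John', '$50', 'a cowboy hat'] -- same roles, different positions
```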
Transition-based dependency parsing (Yamada and Matsumoto, 2003; Nivre, 2008) has attracted considerable attention, not only due to its high accuracy but also due to its small running time. The latter is often realized through determinism, i.e. for each configuration a unique next action is chosen. The action may be a shift of the next word onto the stack, or it may be the addition of a dependency link between words.Because of the determinism, the running time is often linear or close to linear; most of the time and space resources are spent on deciding the next parser action. Generalizations that allow nondeterminism, while maintaining polynomial running time, were proposed by (Huang and Sagae, 2010; Kuhlmann et al., 2011) .This work has influenced, and has been influenced by, similar developments in constituent parsing. The challenge here is to deterministically choose a shift or reduce action. As in the case of dependency parsing, solutions to this problem are often expressed in terms of classifiers of some kind. Common approaches involve maximum entropy (Ratnaparkhi, 1997; Tsuruoka and Tsujii, 2005) , decision trees (Wong and Wu, 1999; Kalt, 2004) , and support vector machines (Sagae and Lavie, 2005) .The programming-languages community recognized early on that large classes of grammars allow deterministic, i.e. linear-time, parsing, provided parsing decisions are postponed as long as possible. This has led to (deterministic) LR(k) parsing (Knuth, 1965; Sippu and Soisalon-Soininen, 1990) , which is a form of shift-reduce parsing. Here the parser needs to commit to a grammar rule only after all input covered by the right-hand side of that rule has been processed, while it may consult the next k symbols (the lookahead). LR is the optimal, i.e. most deterministic, parsing strategy that has this property. Deterministic LR parsing has also been considered relevant to psycholinguistics (Shieber, 1983) .Nondeterministic variants of LR(k) parsing, for use in natural language processing, have been proposed as well, some using tabulation to ensure polynomial running time in the length of the input string (Tomita, 1988; Billot and Lang, 1989) . However, nondeterministic LR(k) parsing is potentially as expensive as, and possibly more expensive than, traditional tabular parsing algorithms such as CKY parsing (Younger, 1967; Aho and Ullman, 1972) , as shown by for example (Shann, 1991) ; greater values of k make matters worse (Lankhorst, 1991) . For this reason, LR parsing is sometimes enhanced by attaching probabilities to transitions (Briscoe and Carroll, 1993) , which allows pruning of the search space (Lavie and Tomita, 1993) . This by itself is not uncontroversial, for several reasons. First, the space of probability distributions expressible by a LR automaton is incomparable to that expressible by a CFG (Nederhof and Satta, 2004) . Second, because an LR automaton may have many more transitions than rules, more training data may be needed to accurately estimate all parameters.The approach we propose here retains some important properties of the above work on LR parsing. First, parser actions are delayed as long as possible, under the constraint that a rule is committed to no later than when the input covered by its right-hand side has been processed. Second, the parser action that is performed at each step is the most likely one, given the left context, the lookahead, and a probability distribution over parses given by a PCFG.There are two differences with traditional LR parsing however. 
First, there is no explicit representation of LR states, and second, probabilities of actions are computed dynamically from a PCFG rather than retrieved as part of static transitions. In particular, this is unlike some other early approaches to probabilistic LR parsing such as (Ng and Tomita, 1991) .The mathematical framework is reminiscent of that used to compute prefix probabilities (Jelinek and Lafferty, 1991; Stolcke, 1995) . One major difference is that instead of a prefix string, we now have a stack, which does not need to be parsed. In the first instance, this seems to make our problem easier. For our purposes however, we need to add new mechanisms in order to take lookahead into consideration.It is known, e.g. from (Cer et al., 2010; Candito et al., 2010) , that constituent parsing can be used effectively to achieve dependency parsing. It is therefore to be expected that our algorithms can be used for dependency parsing as well. The parsing steps of shift-reduce parsing with a binary grammar are in fact very close to those of many dependency parsing models. The major difference is, again, that instead of general-purpose classifiers to determine the next step, we would rely directly on a PCFG.The emphasis of this paper is on deriving the necessary equations to build several variants of deterministic shift-reduce parsers, all guided by a PCFG. We also offer experimental results. | 0 |
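A hedged sketch of the control flow only: a greedy shift-reduce loop over a toy binary grammar in which the next action is the better-scoring one. The scores here are bare rule probabilities; the models in the paper additionally condition on the left context and the lookahead, which this toy omits:

```python
RULES = {("NP", "VP"): ("S", 0.9), ("DT", "NN"): ("NP", 0.8)}

def shift_reduce(tags, rules=RULES):
    stack, buffer = [], list(tags)
    while buffer or len(stack) > 1:
        top2 = tuple(stack[-2:])
        reduce_opt = rules.get(top2)           # (parent, probability) or None
        # prefer to reduce when the rule is probable enough, otherwise shift
        if reduce_opt and (not buffer or reduce_opt[1] >= 0.5):
            parent, _ = reduce_opt
            stack[-2:] = [parent]               # reduce
        elif buffer:
            stack.append(buffer.pop(0))         # shift
        else:
            break                               # no action possible
    return stack

print(shift_reduce(["DT", "NN", "VP"]))  # ['S']
```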
Neural sequence-to-sequence models excel at learning inflectional paradigms from incomplete input (Table 1 shows an example inflection problem). These models, originally borrowed from neural machine translation (Bahdanau et al., 2014), read in a series of input tokens (e.g. characters, words) and output, or translate, them as another series. Although these models have become adept at mapping input to output sequences, like all neural models, they are relatively uninterpretable. We present a novel error analysis technique, based on previous systems for learning to inflect which relied on edit rule induction (Durrett and DeNero, 2013). By using this to interpret the output of a neural model, we can group errors into linguistically salient classes such as producing the wrong case form or incorrect inflection class. Our broader linguistic contribution is to reconnect the inflection task to the descriptive literature on morphological systems. Neural models for inflection are now being applied as cognitive models of human learning in a variety of settings (Malouf, 2017; Silfverberg and Hulden, 2018; Kirov and Cotterell, 2018, and others). They are appealing cognitive models partly because of their high performance on benchmark tasks (Cotterell et al., 2016, and subsq.), and also because they make few assumptions about the morphological system they are trying to model, dispensing with overly restrictive notions of segmentable morphemes and discrete inflection classes. But while these constructs are theoretically troublesome, they are still important for describing many commonly-studied languages; without them, it is relatively difficult to discover what a particular model has and has not learned about a morphological system. This is often the key question which prevents us from using a general-purpose neural network system as a cognitive model (Gulordava et al., 2018). Our error analysis allows us to understand more clearly how the sequence-to-sequence model diverges from human behavior, giving us new information about its suitability as a cognitive model of the language learner. As a case study, we apply our error analysis technique to Russian, one of the lowest-performing languages in SIGMORPHON 2016. We find a large class of errors in which the model incorrectly selects among lexically- or semantically-conditioned allomorphs. Russian has semantically-conditioned allomorphy in nouns and adjectives, and lexically-conditioned allomorphy (inflection classes) in nouns and verbs (Timberlake, 2004); Section 3 gives a brief introduction to the relevant phenomena. While these facts are commonly known to linguists, their importance to modeling the inflection task has not previously been pointed out. Table 1: An example inflection problem: the task is to map the Source and Features to the correct, fully inflected Target. Source | Features | Target: ABAŠ | pos=N, case=NOM, num=SG | ABAŠ; JATAGAN | pos=N, case=INS, num=PL | JATAGANAMI. Section 4 shows that these phenomena account for most of Russian's increased difficulty relative to the other languages. In Section 6, we provide lexical-semantic information to the model, decreasing errors due to semantic conditioning of nouns by 64% and of verbs by 88%. | 0 |
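A minimal sketch of the edit-rule idea behind the error analysis: an inflection is summarised as "strip suffix X, add suffix Y", and the rule recovered from the predicted form is compared with the rule of the gold form to decide whether only the ending (e.g. a case or inflection-class allomorph) went wrong. The error labels and the predicted form below are illustrative:

```python
def edit_rule(lemma, form):
    """Summarise an inflection as (suffix removed, suffix added)."""
    i = 0
    while i < min(len(lemma), len(form)) and lemma[i] == form[i]:
        i += 1  # longest common prefix
    return (lemma[i:], form[i:])

def classify_error(lemma, gold, predicted):
    if predicted == gold:
        return "correct"
    if edit_rule(lemma, predicted)[0] == edit_rule(lemma, gold)[0]:
        return "wrong suffix (e.g. wrong case ending or inflection class)"
    return "stem error"

print(edit_rule("jatagan", "jataganami"))                 # ('', 'ami')
print(classify_error("jatagan", "jataganami", "jataganach"))
```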
Nasal vowels, because their articulatory realisation involves the coupling of three resonance systems, give rise to a rich and complex acoustic signal. In this context, it is worth asking how this class of phonemes is perceived by people with deafness, whose access to acoustic information is limited, since lip reading does not allow vowel nasality to be distinguished (Borel, 2015). Regarding, more specifically, the deaf population fitted with a cochlear implant, one must further question how representative the signal delivered by the device is, given that it comprises 12 to 22 transmission electrodes compared with thousands of hair cells in the healthy ear. Hawks, Fourakis, Skinner, Holden & Holden (1997) notably highlighted difficulties, in an implanted population, in discriminating oral vowels with wider bandwidths. Borel (2015) centred several studies on the perception of nasality in an implanted adult population (76 unilateral implantations, 6 bilateral implantations) using different experimental paradigms. In a phoneme identification task, the 82 tested subjects performed significantly worse than their hearing peers in the perception of nasal vowels, which they perceived as oral vowels, even one year after implantation. In a 2018 study, Crouzet investigated the perception of nasal consonants and vowels by 19 normal-hearing participants, processing the phonemes through a vocoder that simulates the sound distortion associated with the implant, with different parameters varied so as to modulate the spectral resolution of the sound. The results are very clear-cut: whereas the subjects showed no increased difficulty in processing nasality in consonants (with a tendency for performance to improve as spectral resolution increased), nasal vowels were perceived significantly less well, whatever the settings used to synthesise them. These studies support the hypothesis of increased difficulty in perceiving nasal vowels among cochlear implant users. However, we note the absence of studies on the production of this distinction between oral and nasal vowels by cochlear implant users: do the perceptual deficits manifest themselves as a less marked oral/nasal vowel distinction in production? Moreover, perception of the oral/nasal vowel distinction has only been assessed in implanted adults. What about children with pre-lingual deafness fitted with cochlear implants? Furthermore, studies of sound perception through a vocoder by normal-hearing subjects, besides the fact that they may not accurately represent the auditory input perceived through the implant, cannot account for the perceptual strategies developed by the implanted population. The study described below aims to address these questions by investigating the perception and production of oral and nasal vowels in children with pre-lingual deafness fitted with cochlear implants. | 0 |
Word embeddings have enjoyed a surge of popularity in natural language processing (NLP) due to the effectiveness of deep learning and the availability of pretrained, downloadable models for embedding words. Many embedding models have been developed (Collobert et al., 2011; Mikolov et al., 2013; Pennington et al., 2014) and have been shown to improve performance on NLP tasks, including part-of-speech (POS) tagging, named entity recognition, semantic role labeling, dependency parsing, and machine translation (Turian et al., 2010; Collobert et al., 2011; Bansal et al., 2014; Zou et al., 2013) .The majority of this work has focused on a single embedding for each word type in a vocabulary. 1 We will refer to these as type embed-dings. However, the same word type can exhibit a range of linguistic behaviors in different contexts. To address this, some researchers learn multiple embeddings for certain word types, where each embedding corresponds to a distinct sense of the type (Reisinger and Mooney, 2010; Huang et al., 2012; Tian et al., 2014) . But token-level linguistic phenomena go beyond word sense, and these approaches are only reliable for frequent words.Several kinds of token-level phenomena relate directly to NLP tasks. Word sense disambiguation relies on context to determine which sense is intended. POS tagging, dependency parsing, and semantic role labeling identify syntactic categories and semantic roles for each token. Sentiment analysis and related tasks like opinion mining seek to understand word connotations in context.In this paper, we develop and evaluate models for embedding word tokens. Our token embeddings capture linguistic characteristics expressed in the context of a token. Unlike type embeddings, it is infeasible to precompute and store all possible (or even a significant fraction of) token embeddings. Instead, our token embedding models are parametric, so they can be applied on the fly to embed any word in its context. We focus on simple and efficient token embedding models based on local context and standard neural network architectures. We evaluate our models by using them to provide features for downstream low-resource syntactic tasks: Twitter POS tagging and dependency parsing. We show that token embeddings can improve the performance of a non-structured POS tagger to match the state of the art Twitter POS tagger of Owoputi et al. (2013) . We add our token embeddings to Tweeboparser (Kong et al., 2014) , improving its performance and establishing a new state of the art for Twitter dependency parsing. | 0 |
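A hedged sketch of a parametric token-embedding model: the embedding of a token is computed on the fly from its type vector and a one-word window of context passed through a small projection. The random type vectors and the single tanh layer are assumptions; pretrained type embeddings and a richer context encoder would normally be used:

```python
import numpy as np

rng = np.random.default_rng(0)
TYPE_VECS = {w: rng.normal(size=50) for w in ["the", "bank", "river", "loan"]}
W = rng.normal(size=(50, 150)) * 0.1  # projects [left; word; right] to a token vector

def token_embedding(tokens, i):
    """Compute a context-dependent embedding for tokens[i] on the fly."""
    left = TYPE_VECS.get(tokens[i - 1], np.zeros(50)) if i > 0 else np.zeros(50)
    right = TYPE_VECS.get(tokens[i + 1], np.zeros(50)) if i + 1 < len(tokens) else np.zeros(50)
    word = TYPE_VECS.get(tokens[i], np.zeros(50))
    return np.tanh(W @ np.concatenate([left, word, right]))

sent1 = ["the", "river", "bank"]
sent2 = ["the", "bank", "loan"]
e1, e2 = token_embedding(sent1, 2), token_embedding(sent2, 1)
print(e1.shape, float(np.dot(e1, e2)))  # same word type, different token embeddings
```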
Relation classification focuses on classifying the semantic relations between pairs of marked entities in given sentences (Hendrickx et al., 2010) . It is a fundamental task which can serve as a pre-existing system and provide prior knowledge for information ex-traction, natural language understanding, information retrieval, etc. However, automatic recognition of semantic relation is challenging. Traditional feature based approaches rely heavily on the quantity and quality of hand-crafted features and lexical resources, and it is time-consuming to select an optimal subset of relevant features in order to maximize performance. Though kernel based methods get rid of the feature selection process, they need elaborately designed kernels and are also computationally expensive.Recently, with the renaissance of neural network, deep learning techniques have been adopted to provide end-to-end solutions for many classic NLP tasks, such as sentence modeling (Socher, 2014; Kim, 2014) and machine translation (Cho et al., 2014) . Recursive Neural Network (RNN) (Socher et al., 2012) and Convolutional Neural Network (CNN) (Zeng et al., 2014) have proven powerful in relation classification. In contrast to traditional approaches, neural network based methods own the ability of automatic feature learning and alleviate the problem of severe dependence on human-designed features and kernels.However, previous researches (Socher et al., 2012) imply that some features exploited by traditional methods are still informative and can help enhance the performance of neural network in relation classification. One simple but effective approach is to concatenate lexical level features to features extracted by neural network and directly pass the combined vector to classifier. In this way, Socher et al. (2012) , Liu et al. (2015) achieve better performances when considering some external features produced by existing NLP tools. Another more sophisticated method adjusts the structure of neural network according to the parse trees of input sentences. The results of (Li et al., 2015) empirically suggest syntactic structures from recursive models might offer useful power in relation classification. Besides relation classification, parse tree also gives neural network a big boost in other NLP tasks Tai et al., 2015) . Dependency parse tree is valuable in relation classification task. According to our observation, dependency tree usually shortens the distances between pairs of marked entities and helps trim off redundant words, when comparing with plain text. For example, in the sentence shown in Figure 1 , two marked entities span the whole sentence, which brings much noise to the recognition of their relation. By contrast, in the dependency tree corresponding to the sentence, the path between two marked entities comprises only four words and extracts a key phrase "caused by" that clearly implies the relation of entities. This property of dependency tree is ubiquitous and consistent with the Shortest Path Hypothesis which is accepted by previous studies Xu et al., 2015a; Xu et al., 2015b) .To better utilize the powerful neural network and make the best of the abundant linguistic knowledge provided by parse tree, we propose a position encoding convolutional neural network (PECNN) based on dependency parse tree for relation classification. In our model, to sufficiently benefit from the important property of dependency tree, we introduce the position feature and modify it in the context of parse tree. 
Tree-based position features encode the relative positions between each word and marked entities in a dependency tree, and help the network pay more attention to the key phrases in sentences. Moreover, with a redefinition of "context", we design two kinds of tree-based convolution kernels for capturing the structural information and salient features of sentences.To sum up, our contributions are:1)We propose a novel convolutional neural network with tree-based convolution kernels for relation classification.2) We confirm the feasibility of transferring the position feature from plain text to dependency tree, and compare the performances of different position features by experiments.3) Experimental results on the benchmark dataset show that our proposed method outperforms the state-of-the-art approaches. To make the mechanism of our model clear, we also visualize the influence of tree-based position feature on relation classification task. | 0 |
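A minimal sketch of the tree-based position feature: the position of a word relative to a marked entity is taken to be its distance in the dependency tree rather than its offset in the surface string. The hand-built toy tree uses a hypothetical sentence in the spirit of the "caused by" example mentioned above; in practice the tree would come from a dependency parser:

```python
import networkx as nx

# dependency tree of: "The pollution was caused by the shipwreck" (illustrative)
edges = [("caused", "pollution"), ("pollution", "The"), ("caused", "was"),
         ("caused", "by"), ("by", "shipwreck"), ("shipwreck", "the")]
tree = nx.Graph(edges)

def tree_position_features(words, e1, e2, tree):
    """Distance of every word to the two marked entities in the tree."""
    d1 = nx.shortest_path_length(tree, target=e1)
    d2 = nx.shortest_path_length(tree, target=e2)
    return {w: (d1[w], d2[w]) for w in words}

words = ["The", "pollution", "was", "caused", "by", "shipwreck", "the"]
print(tree_position_features(words, "pollution", "shipwreck", tree))
# e.g. "caused" is 1 step from "pollution" and 2 steps from "shipwreck"
```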
Essentially, Chinese is a kind of paratactic language, rather than a hypotactic language. This makes it character based, not word based. However, words are the basic linguistic units of natural language. Thus, the identification of lexical words, or the delimitation of words in running texts, is a prerequisite in Chinese natural language processing (NLP). Chinese word segmentation can be cast as a simple and effective formulation of character sequence labeling. A number of recent papers have examined this problem (Zhang et al., 2003; Xue, 2003; Peng et al., 2004) and could provide relatively good performance. However, these systems are genre or domain specific and use many different segmentation guidelines derived from the training dataset. This characteristic guarantees these systems good performance on known words, yet severely deteriorates on unknown words 1 from relatively unfamiliar context. This constitutes the major drawback of supervised segmentation. In contrast, unsupervised approaches are model-free and more adaptive to unfamiliar context. This provides a potential solution for identifying unknown words and has been attracting more attention in recent years (Sproat and Shih, 1990; Goldwater et al., 2006; Mochihashi et al., 2009). Since supervised and unsupervised methods excel in different situations, a natural idea would be a combination of the two to overcome the drawbacks of both. A myriad of attempts exist and can be roughly categorized into two groups: simultaneous and asynchronous manner. In simultaneous design, most researchers bind to the theory of transfer learning (or multitask learning, Caruana, 1997), and believe it achieves more when all the tasks are solved together. Admittedly, this may be true in some situations (Gao et al., 2005; Tou Ng and Low, 2004). However, these achievements are often gained at the cost of complex system design. On the other side, asynchronous systems moderate well between performance and simplicity. Thus, they are more favorable for large data processing, especially when real-time analysis is primal. In this paper, we report the integrated system designed for the CLP2012 Micro-blog word segmentation subtask 2 | 0 |
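A minimal sketch of the character-labelling view of segmentation: each character carries a B/M/E/S tag (begin/middle/end of a word, or single-character word) and the tag sequence is decoded back into words. The tagger itself, supervised or unsupervised, is outside the scope of this toy decoder; the example sentence is illustrative:

```python
def decode_bmes(chars, tags):
    """Turn a B/M/E/S tag sequence over characters back into a word list."""
    words, current = [], ""
    for ch, tag in zip(chars, tags):
        if tag == "S":
            words.append(ch)
        elif tag == "B":
            current = ch
        elif tag == "M":
            current += ch
        else:  # "E"
            words.append(current + ch)
            current = ""
    return words

print(decode_bmes(list("我爱北京"), ["S", "S", "B", "E"]))
# ['我', '爱', '北京']
```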
Freeform data driven Natural Language Generation (NLG) is a topic explored by academics and artists alike, but motivating its empirical study is a difficult task. While many language models used in statistical NLP are generative and can easily produce sample sentences by running their "generative mode", if all that is required is a plausible sentence one might as well pick a sentence at random from any existing corpus.NLG becomes useful when constraints exist such that only certain sentences are valid. The majority of NLG applies a semantic constraint of "what to say", producing sentences with communicative goals. Other work such as ours investigates constraints in structure; producing sentences of a certain form without concern for their specific meaning.We study two constraints concerning the words that are allowed in a sentence. The first sets a fixed vocabulary such that only sentences where all words are in-vocab are allowed. The second demands not only that all words are in-vocab, but also requires the inclusion of a specific word somewhere in the sentence.These constraints are natural in the construction of language education exercises, where students have small known vocabularies and exercises that reinforce the knowledge of arbitrary words are required. To provide an example, consider a Chinese teacher composing a quiz that asks students to translate sentences from English to Chinese. The teacher cannot ask students to translate words that have not been taught in class, and would like ensure that each vocabulary word from the current book chapter is included in at least one sentence.Using a system such as ours, she could easily generate a number of usable sentences that contain a given vocab word and select her favorite, repeating this process for each vocab word until the quiz is complete.The construction of such a system presents two primary technical challenges. First, while highly parameterized models trained on large corpora are a good fit for data driven NLG, sparsity is still an issue when constraints are introduced. Traditional smoothing techniques used for prediction based tasks are inappropriate, however, as they liberally assign probability to implausible text. We investigate smoothing techniques better suited for NLG that smooth more precisely, sharing probability only between words that have strong semantic connections.The second challenge arises from the fact that both vocabulary and word inclusion constraints are easily handled with a rejection sampler that repeatedly generates sentences until one that obeys the constraints is produced. Unfortunately, for models with a sufficiently wide range of outputs the computation wasted by rejection quickly becomes prohibitive, especially when the word inclusion constraint is applied. We define models that sample directly from the possible outputs for each constraint without rejection or backtracking, and closely approximate the distribution of the true rejection samplers.We contrast several generative systems through both human and automatic evaluation. Our best system effectively captures the compositional nature of our training data, producing error-free text with nearly 80 percent accuracy without wasting computation on backtracking or rejection. When the word inclusion constraint is introduced, we show clear empirical advantages over the simple solution of searching a large corpus for an appropriate sentence. | 0 |
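A hedged sketch of the baseline rejection sampler discussed above: sentences are drawn from a toy bigram model and rejected until one satisfies both the fixed-vocabulary and the word-inclusion constraints. The bigram table is made up; the point is only to show the wasted computation that the paper's direct samplers avoid:

```python
import random

BIGRAMS = {"<s>": ["i", "we"], "i": ["like", "see"], "we": ["see"],
           "like": ["cats", "dogs"], "see": ["cats", "dogs"],
           "cats": ["</s>"], "dogs": ["</s>"]}

def sample_sentence():
    word, out = "<s>", []
    while word != "</s>":
        word = random.choice(BIGRAMS[word])
        if word != "</s>":
            out.append(word)
    return out

def rejection_sample(vocab, must_include, max_tries=10000):
    """Resample until the sentence is in-vocab and contains the target word."""
    for _ in range(max_tries):
        sent = sample_sentence()
        if set(sent) <= vocab and must_include in sent:
            return sent
    return None

print(rejection_sample({"i", "we", "see", "like", "cats"}, "cats"))
```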
Constituency is a foundational building block for phrase-structure grammars. It captures the notion of what tokens can group together and act as a single unit. The motivating insight behind this paper is that constituency may be reflected in markups of bracketings that people provide in doing natural tasks. We term these segments naturallyoccurring bracketings for their lack of intended syntactic annotation. These include, for example, the segments people pick out from sentences to refer to other Wikipedia pages or to answer semanticallyoriented questions; see Figure 1 for an illustration.Gathering such data requires low annotation expertise and effort. On the other hand, these data are not necessarily suitable for training parsers, as they often contain incomplete, incorrect and sometimes conflicting bracketing information. It is thus an empirical question whether and how much we 1 Our code is publicly available at https://github. com/tzshi/nob-naacl21. Science fiction (sometimes shortened to sci-fi or SF) is a genre of speculative fiction that typically deals with imaginative and futuristic concepts such as advanced science and technology, space exploration, time travel, parallel universes, and extraterrestrial life.Republicans have been imploring the White House to compromise on the wage issue.Q&As:Corresponding bracketings: Figure 1 : Two example types of naturally-occurring bracketings. Blue underlined texts in the Wikipedia sentence are hyperlinks. We bracket the QA-SRL sentence in matching colors according to the answers. could learn syntax from these naturally-occurring bracketing data.To overcome the challenge of learning from this kind of noisy data, we propose to train discriminative constituency parsers with structured ramp loss (Do et al., 2008) , a technique previously adopted in machine translation (Gimpel and Smith, 2012) . Specifically, we propose two loss functions to directly penalize predictions in conflict with available partial bracketing data, while allowing the parsers to induce the remaining structures.We experiment with two types of naturallyoccurring bracketing data, as illustrated in Figure 1 . First, we consider English question-answer pairs collected for semantic role labeling (QA-SRL; He et al., 2015) . The questions are designed for non-experts to specify semantic arguments of predicates in the sentences. We observe that although no syntactic structures are explicitly asked for, humans tend to select constituents in their answers. Second, Wikipedia articles 2 are typically richly annotated with internal links to other articles. These links are marked on phrasal units that refer to standalone concepts, and similar to the QA-SRL data, they frequently coincide with syntactic constituents.Experiment results show that naturally-occurring bracketings across both data sources indeed help our models induce syntactic constituency structures. Training on the QA-SRL bracketing data achieves an unlabeled F1 score of 68.9 on the English WSJ corpus, an accuracy competitive with state-of-theart unsupervised constituency parsers that do not utilize such distant supervision data. 
We find that our proposed two loss functions have slightly different interactions with the two data sources, and that the QA-SRL and Wikipedia data have varying coverage of phrasal types, leading to different error profiles.In sum, through this work, (1) we demonstrate that naturally-occurring bracketings are helpful for inducing syntactic structures, (2) we incorporate two new cost functions into structured ramp loss to train parsers with noisy bracketings, and (3) our distantly-supervised models achieve results competitive with the state of the art of unsupervised constituency parsing despite training with smaller data size (QA-SRL) or out-of-domain data (Wikipedia). | 0 |
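A minimal sketch of the intuition behind the proposed losses, not the exact structured ramp loss: a predicted set of spans is charged one unit of cost for every span that crosses a naturally-occurring bracket, while material outside the annotation is left unconstrained. Token indices refer to the "Republicans ..." sentence quoted above, and the choice of gold bracket and predicted spans is an assumption for illustration:

```python
def crosses(span, bracket):
    """True if the two inclusive (start, end) spans overlap without nesting."""
    (i, j), (a, b) = span, bracket
    return (i < a <= j < b) or (a < i <= b < j)

def conflict_cost(predicted_spans, partial_brackets):
    return sum(crosses(s, g) for s in predicted_spans for g in partial_brackets)

# Republicans(0) have(1) been(2) imploring(3) the(4) White(5) House(6)
# to(7) compromise(8) on(9) the(10) wage(11) issue(12)
gold_partial = [(4, 6)]                  # "the White House", e.g. from a QA answer
good = [(4, 6), (3, 12), (7, 12)]        # compatible spans: cost 0
bad = [(6, 8)]                           # "House to compromise" crosses the bracket
print(conflict_cost(good, gold_partial), conflict_cost(bad, gold_partial))  # 0 1
```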