text (string, 4 to 222k) | label (int64, 0 to 4) |
---|---|
Natural language processing for the legal domain has its own unique challenges, due to the way legal documents are structured as well as to the domain-specific language being used. Technology dealing with legal documents has received increased attention in recent years, as can be seen from the number of recent scientific papers being published, the existence of the Natural Legal Language Processing (NLLP) workshop (Aletras et al., 2019, 2020) and various international projects dealing with natural language processing for the legal domain. Named entity recognition (NER) is the process of identifying text spans that refer to real-world objects, such as organizations or persons. One of the annotation schemes used in a large number of works was introduced by the CoNLL-2003 shared task on language-independent NER (Tjong Kim Sang and De Meulder, 2003) and refers to names of persons, organizations and locations. This annotation scheme can be applied in the legal domain as well, thus allowing existing systems to try to annotate legal documents (with or without being adapted to legal text). However, domain-specific entities are usually added to enhance the information extraction capabilities of text processing algorithms specifically designed for the legal domain. Dozier et al. (2010), while dealing with depositions, pleadings and trial-specific documents, proposed including entities for attorneys, judges, courts and jurisdictions. Glaser et al. (2018) proposed adding the entities date, money value, reference and "other" for analyzing legal contracts. Leitner et al. (2019, 2020) proposed using 7 coarse-grained entity classes which can be further expanded into 19 fine-grained classes. In the context of the "Multilingual Resources for CEF.AT in the legal domain" (MARCELL) project, a large, clean, validated domain-specific corpus was created. It contains monolingual corpora extracted from national legislation (laws, decrees, regulations, etc.) of the seven involved countries, including Romania (Tufiș et al., 2020). All seven corpora are aligned at the topic-domain level. The Romanian corpus was preprocessed (split at sentence level, tokenized, lemmatized and annotated with POS tags) using tools developed at the Research Institute for Artificial Intelligence "Mihai Drăgănescu", Romanian Academy (RACAI). Named entities were identified using a general-purpose tool (Păiș, 2019). This tool was designed for general Romanian language and allowed only four entity types: organizations, locations, persons and time expressions; it was not trained on any legal texts. For the purposes of this work, we created a manually annotated corpus comprising legal documents extracted from the larger MARCELL-RO corpus. We chose an annotation scheme covering 5 entity classes: person (PER), location (LOC), organization (ORG), time expressions (TIME) and legal document references (LEGAL). References are introduced similarly to the work of Landthaler et al. (2016) and the coarse-grained class proposed by Leitner et al. (2019), without additional sub-classes. Thus, they are references to legal documents such as laws, ordinances, government decisions, etc. For the purposes of this work, in the Romanian legal domain, we decided to explore only these coarse-grained classes, without any fine-grained entities. This has the advantage of allowing the corpus to be used together with other general-purpose NER corpora.
Furthermore, it allows us to judge the quality of the resulting NER system against existing systems. In order to train domain-specific NER systems, we constructed distributional representations of words (also known as word embeddings) based on the entire MARCELL corpus. Finally, we explored several neural architectures and adapted them as needed to the legal domain. This paper is organized as follows: Section 2 presents related work, Section 3 describes the LegalNERo corpus, Section 4 presents the legal-domain word embeddings, Section 5 describes the NER system architecture, Section 6 gives the results, and conclusions are presented in Section 7. | 0 |
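As a concrete illustration of the annotation scheme described above, the sketch below converts span annotations in the five LegalNERo classes into token-level BIO tags, the format commonly fed to neural NER models. It is not the authors' code; the Romanian example sentence, its character offsets and the helper function are invented for illustration.

```python
# Minimal sketch (not the authors' code): converting span annotations in the
# LegalNERo scheme (PER, LOC, ORG, TIME, LEGAL) into token-level BIO tags.
# The example sentence and spans are invented for illustration.

def spans_to_bio(tokens, spans):
    """tokens: list of (text, start, end); spans: list of (start, end, label)."""
    tags = ["O"] * len(tokens)
    for s_start, s_end, label in spans:
        inside = False
        for i, (_, t_start, t_end) in enumerate(tokens):
            if t_start >= s_start and t_end <= s_end:
                tags[i] = ("I-" if inside else "B-") + label
                inside = True
    return tags

tokens = [("Legea", 0, 5), ("nr.", 6, 9), ("53/2003", 10, 17),
          ("a", 18, 19), ("fost", 20, 24), ("modificată", 25, 35)]
spans = [(0, 17, "LEGAL")]  # "Legea nr. 53/2003" is a legal reference
print(spans_to_bio(tokens, spans))
# ['B-LEGAL', 'I-LEGAL', 'I-LEGAL', 'O', 'O', 'O']
```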
Rather than matching texts in the bag-of-words space, Dense Retrieval (DR) methods first encode texts into a dense embedding space (Xiong et al., 2021) and then conduct text retrieval using efficient nearest neighbor search (Chen et al., 2018; Guo et al., 2020; Johnson et al., 2021). With pre-trained language models and dedicated fine-tuning techniques, the learned representation space has significantly advanced the first-stage retrieval accuracy of many language systems, including web search (Xiong et al., 2021), grounded generation, open-domain question answering (Izacard and Grave, 2020), etc. [Figure 1: t-SNE plots of the embedding space of a BERT reranker for q-d pairs (left) and an ANCE dense retriever for queries/documents (right). Both models are trained on web search and transferred to medical search.] Purely using the learned embedding space for retrieval has raised concerns about generalization ability, especially in scenarios without dedicated supervision signals. Many have observed diminishing advantages of DR models on various datasets if they are not fine-tuned with task-specific labels, i.e., in the zero-shot setup (Thakur et al., 2021). However, in many scenarios outside commercial web search, zero-shot is the norm. Obtaining training labels is difficult, expensive, and sometimes infeasible, especially in special domains (e.g., medical) where annotation requires strong expertise or is even prohibited because of privacy constraints. The lack of zero-shot ability hinders the democratization of advancements in dense retrieval from data-rich domains to everywhere else. Many equally, if not more, important real-world search scenarios still rely on unsupervised exact-match methods that have been around for decades, e.g., BM25 (Robertson and Jones, 1976). Within the search pipeline, the generalization of first-stage DR models is notably worse than that of subsequent reranking models (Thakur et al., 2021). Reranking models, similar to many classification models, only require a decision boundary between relevant and irrelevant query-document pairs (q-d pairs) in the representation space. In comparison, DR needs good local alignments across the entire space to support nearest neighbor matching, which is much harder to learn. In Figure 1, we use t-SNE (van der Maaten and Hinton, 2008) to illustrate this difference. We show learned representations of a BERT-based reranker (Nogueira and Cho, 2019) and a BERT-based dense retriever (Xiong et al., 2021), in zero-shot transfer from web search (Bajaj et al., 2016) to the medical domain (Voorhees et al., 2021). The representation space learned for reranking yields two manifolds with a clear decision boundary; data points in the target domain naturally cluster with their corresponding classes (relevant or irrelevant) from the source domain, leading to good generalization. In comparison, the representation space learned for DR is more scattered. Target domain data points are grouped separately from those of the source domain; it is much harder for the learned nearest neighbor locality to generalize from the source to the isolated target domain region. In this paper, we present Momentum Adversarial Domain Invariant Representations learning (MoDIR) to improve the accuracy of zero-shot dense retrieval (ZeroDR). We first introduce an auxiliary domain classifier that is trained to discriminate source embeddings from target ones.
Then the DR encoder is not only updated to encode queries and relevant documents together in the source domain, but also trained adversarially to confuse the domain classifier and to push for a more domain invariant embedding space. To ensure stable and efficient adversarial learning, we propose a momentum method that trains the domain classifier with a momentum queue of embeddings saved from previous iterations.Our experiments evaluate the generalization ability of dense retrieval with MoDIR using 15 retrieval tasks from the BEIR benchmark (Thakur et al., 2021) . On these retrieval tasks from various domains including biomedical, finance, scientific, etc., MoDIR improves the zero-shot accuracy of two standard models, DPR and ANCE (Xiong et al., 2021) . On tasks where evaluation labels have sufficient coverage for DR (Thakur et al., 2021) , MoDIR's improvements are robust and significant, despite not using any target domain training labels. We also verify the necessity of the proposed momentum approach, without which the domain classifier fails to capture the domain gaps, and the adversarial training does not learn domain invariant representations, resulting in little improvement in ZeroDR.We conduct further analyses to reveal interesting properties of MoDIR and its learned embedding space. During the adversarial training process, the target domain embeddings are gradually pushed towards the source domain and eventually absorbed as a subgroup of the source. In the learned representation space, our manual examinations find various cases where a target domain query is located close to source queries with similar information needs. This indicates that ZeroDR's generalization ability comes from the combination of information overlaps of source/target domains, and MoDIR's ability to identify the right correspondence between them. | 0 |
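The momentum-queue adversarial update described above can be pictured with a short sketch. This is not the authors' implementation: the encoder interface, the `retrieval_loss` signature, the 768-dimensional embeddings, the queue size and the loss weight `lam` are all assumptions made for illustration.

```python
# Illustrative sketch of momentum-queue adversarial training (not MoDIR's code).
# `encoder` is any dual-encoder DR model and `retrieval_loss` its usual
# source-domain contrastive loss; both are assumed interfaces.
import torch, torch.nn as nn, torch.nn.functional as F
from collections import deque

domain_clf = nn.Linear(768, 2)                      # source=0, target=1
clf_opt = torch.optim.Adam(domain_clf.parameters(), lr=1e-4)
queue = deque(maxlen=4096)                          # momentum queue of past embeddings

def train_step(encoder, enc_opt, src_batch, tgt_batch, retrieval_loss, lam=0.1):
    src_emb = encoder(src_batch)                    # [B, 768]
    tgt_emb = encoder(tgt_batch)                    # [B, 768]

    # 1) Update the domain classifier on embeddings saved from previous steps.
    queue.extend([(e.detach(), 0) for e in src_emb] + [(e.detach(), 1) for e in tgt_emb])
    q_emb = torch.stack([e for e, _ in queue])
    q_lab = torch.tensor([y for _, y in queue])
    clf_loss = F.cross_entropy(domain_clf(q_emb), q_lab)
    clf_opt.zero_grad(); clf_loss.backward(); clf_opt.step()

    # 2) Update the encoder: source relevance loss plus an adversarial term that
    #    pushes the classifier toward a uniform (confused) prediction.
    logits = domain_clf(torch.cat([src_emb, tgt_emb]))
    uniform = torch.full_like(logits, 0.5)
    adv_loss = F.kl_div(F.log_softmax(logits, dim=-1), uniform, reduction="batchmean")
    loss = retrieval_loss(src_batch, src_emb) + lam * adv_loss
    enc_opt.zero_grad(); loss.backward(); enc_opt.step()
    return loss.item()
```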
The centering model has evolved as a methodology for the description and explanation of the local coherence of discourse (Grosz et al., 1983; 1995), with a focus on pronominal and nominal anaphora. Though several cross-linguistic studies have been carried out (cf. the enumeration in Grosz et al. (1995)), an almost canonical scheme for the ordering on the forward-looking centers has emerged, one that reflects well-known regularities of fixed word order languages such as English. With the exception of Walker et al. (1990; 1994) for Japanese, Turan (1995) for Turkish, Rambow (1993) for German and Cote (1996) for English, only grammatical roles are considered and the (partial) ordering in Table 1 is taken for granted. [Table 1 (Grammatical Role Based Ranking on the Cf): subject > dir-object > indir-object > complement(s) > adjunct(s)] Table 1 contains the most explicit ordering of grammatical roles we are aware of and has been taken from Brennan et al. (1987). Often, the distinction between complements and adjuncts is collapsed into the category "others" (cf., e.g., Grosz et al. (1995)). Our work on the resolution of anaphora (Strube & Hahn, 1995; Hahn & Strube, 1996) and textual ellipsis (Hahn et al., 1996), however, is based on German, a free word order language, in which grammatical role information is far less predictive for the organization of centers. Rather, for establishing proper referential relations, the functional information structure of the utterances becomes crucial (different perspectives on functional analysis are brought forward in Daneš (1974b) and Dahl (1974)). We share the notion of functional information structure as developed by Daneš (1974a). He distinguishes between two crucial dichotomies, viz. given information vs. new information (constituting the information structure of utterances) on the one hand, and theme vs. rheme on the other (constituting the thematic structure of utterances; cf. Halliday & Hasan (1976, pp. 325-6)). Daneš refers to a definition given by Halliday (1967) to avoid the confusion likely to arise in the use of these terms: "[...] while given means what you were talking about (or what I was talking about before), theme means what I am talking about (now) [...]" (Halliday, 1967, p. 212). Daneš concludes that the distinction between given information and theme is justified, while the distinction between new information and rheme is not. Thus, we arrive at a trichotomy between given information, theme and rheme (the latter being equivalent to new information). We subscribe to these considerations, too, and will return to these notions in Section 3 in order to rephrase them more explicitly using the terminology of the centering model. In this paper, we intend to make two contributions to the centering approach. The first one, the introduction of functional notions of information structure into the centering model, is methodological in nature. The second one concerns an empirical issue in that we demonstrate how a functional model of centering can successfully be applied to the analysis of several forms of anaphoric text phenomena. At the methodological level, we develop arguments that (at least for free word order languages) grammatical role indicators should be replaced by functional role patterns to more adequately account for the ordering of discourse entities in center lists. In Section 3 we elaborate on the particular information structure criteria underlying a function-based center ordering.
We also make a second, even more general methodological claim for which we have gathered some preliminary, though still not conclusive, evidence. Based on a re-evaluation of empirical arguments discussed in the literature on centering, we stipulate that exchanging grammatical criteria for functional ones is also a reasonable strategy for fixed word order languages. Grammatical role constraints can indeed be rephrased as functional ones, which is simply due to the fact that grammatical roles and the information structure patterns, as we define them, coincide in these kinds of languages. Hence, the proposal we make seems more general than the ones currently under discussion in that, given a functional framework, fixed and free word order languages can be accounted for by the same ordering principles. As a consequence, we argue against Walker et al.'s (1994, p. 227) stipulation, which assumes that the Cf ranking is the only parameter of the centering theory which is language-dependent. Instead, we claim that functional centering constraints for the Cf ranking are possibly universal. The second major contribution of this paper is related to the unified treatment of specific text phenomena. It consists of an equally balanced treatment of intersentential (pro)nominal anaphora and textual ellipsis (also called functional or partial anaphora). The latter phenomenon (cf. the examples given in the next section), in particular, is usually only sketchily dealt with in the centering literature, e.g., by asserting that the entity in question "is realized but not directly realized" (Grosz et al., 1995, p. 217). Furthermore, the distinction between those two kinds of realization is generally delegated to the underlying semantic theory. We will develop arguments on how to locate elliptical discourse entities and resolve textual ellipsis properly at the center level. The ordering constraints we supply account for all of the above-mentioned types of anaphora in a precise way, including (pro)nominal anaphora (Strube & Hahn, 1995; Hahn & Strube, 1996). This claim will be validated by a substantial body of empirical data (cf. Section 4). | 0 |
Grammatical gender is a categorization of nouns in certain languages which forms a basis for agreement with related words in sentences, and plays an important role in disambiguation and correct usage (Ibrahim, 2014). An estimated third of the current world population are native speakers of gendered languages, and over one-sixth are L2 speakers. Having a gender assigned to nouns can potentially affect how speakers think about the world (Samuel et al., 2019). A systematic study of the rules governing these assignments can point to the origin of gender biases, potentially help mitigate them, and improve gender-based inclusivity (Sexton, 2020). Grammatical gender (hereafter referred to as gender) need not coincide with "natural gender", which can make language acquisition more challenging. For example, Irish cailín (meaning "girl") is assigned a masculine gender. Works investigating the role of gender in acquiring a new language (Sabourin et al., 2006; Ellis et al., 2012) have found that speakers of a language with grammatical gender have an advantage when acquiring a new gendered language. Automated generation of simple rules for assigning gender can be helpful for L2 learners, especially when L1 is genderless. Tools for understanding the predictions of statistical models, for example the variable importance analysis of Friedman (2001), have been used even before the widespread use of black-box neural models. Recently, the interest in such tools, reformulated as explainability in the neural context (Guidotti et al., 2018), has surged, with a corresponding development of a suite of solutions (Bach et al., 2015; Sundararajan et al., 2017; Shrikumar et al., 2017; Lundberg and Lee, 2017). These approaches typically explain the model prediction by attributing it to relevant bits in the input encoding. While faithful to the black-box model's "decision making", the explanations obtained may not be readily intuited by human users. Surrogate models, which globally approximate the model predictions by a more interpretable model, or obtain prediction-specific explanations by perturbing the input in domain-specific ways, have been introduced to remedy this problem (Ribeiro et al., 2016; Molnar, 2019). We consider a novel surrogate approach to explainability, where we map the feature embedding learned by the black-box models to an auxiliary space of explanations. We contend that the best way to arrive at a decision (prediction) may not necessarily be the best way to explain it. While prior work is largely limited to the input encodings, by designing a set of auxiliary attributes we can provide explanations at desired levels of complexity, which could (for example) be made to suit the language learner's ability in our motivating setting. Our techniques overcome issues in prior art in our setting and are completely language-independent, with potential for use in broader natural language processing and other deep learning explanations. For illustration, we examine French in detail, where the explanations require both meaning and form. | 0 |
This paper describes a preliminary reading comprehension system constructed as a semester-long project for a natural language processing course. This was the first exposure to this material for all but one student, and so much of the semester was spent learning about and constructing the tools that would be needed to attack this comprehensive problem. The course was structured around the project of building a question answering system following the HumSent evaluation as used by the Deep Read system (Hirschman et al., 1999). The Deep Read reading comprehension prototype system (Hirschman et al., 1999) achieves a level of 36% of the answers correct using a bag-of-words approach together with limited linguistic processing. Since the average number of sentences per passage is 19.41, this performance is much better than chance (i.e., 5%). We hypothesized that by using a combination of syntactic and semantic features and machine learning techniques, we could improve the accuracy of question answering on the test set of the Remedia corpus over these reported levels. | 0 |
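For reference, a bag-of-words baseline of the kind the Deep Read comparison refers to can be sketched in a few lines: the system answers by returning the passage sentence with the largest content-word overlap with the question, which is what the HumSent metric scores. The tokenizer, stop-word list and toy passage below are assumptions, not taken from the Remedia corpus.

```python
# Sketch of a bag-of-words baseline (not Deep Read's actual code): answer a
# question by returning the passage sentence with the largest content-word
# overlap. Tokenization and the stop-word list are assumed.
import re

STOP = {"the", "a", "an", "of", "to", "in", "is", "was",
        "what", "who", "when", "where", "why", "did"}

def tokens(text):
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOP}

def answer(question, passage_sentences):
    q = tokens(question)
    # HumSent-style output: the single sentence judged most likely to contain the answer.
    return max(passage_sentences, key=lambda s: len(q & tokens(s)))

story = ["The fire started in the kitchen late at night.",
         "Firefighters arrived within ten minutes.",
         "No one was hurt."]
print(answer("Where did the fire start?", story))
# -> "The fire started in the kitchen late at night."
```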
The proposed task, SemEval-2022 Task 5 Multimedia Automatic Misogyny Identification (MAMI) (Fersini et al., 2022), consists in the identification of misogynous memes in English, taking advantage of both the text and the images available as sources of information. Overall, our proposed method consists of a multimodal approach combining different features (e.g., logits, probabilities, embeddings) of a text Transformer-based model and an image CNN model in a late fusion approach. This late fusion step implies that both models are trained and fine-tuned separately for the task. Then, the features from each model are concatenated and jointly used as input for a final classifier that combines their knowledge to obtain a final prediction (see Figure 1). Different preprocessing steps, text and image models, concatenated feature combinations, and classifiers are explored to obtain the final multimodal architecture. Our presented method has been developed for sub-tasks A and B of the MAMI competition independently. Sub-task A consists of binary misogynous meme classification, where a meme should be categorized either as misogynous or not misogynous. On the other hand, sub-task B requires a more detailed multi-label classification where misogynous content should be recognized among potentially overlapping categories such as stereotype, shaming, objectification, and violence. It is noteworthy that our multimodal late fusion method outperforms single models in both sub-tasks, with the improvement being more remarkable in the more complex sub-task B. Similarly, considering that both sub-tasks share the same data, the results of model evaluation on both tasks show how a model trained on the complex sub-task B can achieve the same results as a model trained only on the binary sub-task A. Therefore, future studies should investigate the complexity and pruning of the required models. This paper provides new insights into information fusion for tackling multimodal tasks, presenting an in-depth exploration of different late fusion approaches and image processing steps. The presented work might help identify malicious behavior. | 0 |
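The late fusion step can be illustrated with a minimal sketch: features produced by a text model and an image model that were fine-tuned separately are concatenated and passed to a final classifier. The feature dimensions, the random stand-in features and the choice of logistic regression are assumptions; the actual system explores several feature types and classifiers.

```python
# Sketch of the late-fusion step described above (not the authors' exact pipeline):
# features extracted by an already fine-tuned text model and image model are
# concatenated and fed to a final classifier. Dimensions and feature types are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse(text_feats, image_feats):
    """text_feats: [N, d_t] (e.g., Transformer logits or CLS embeddings),
    image_feats: [N, d_i] (e.g., CNN penultimate-layer activations)."""
    return np.concatenate([text_feats, image_feats], axis=1)

# Toy stand-ins for features of N=4 memes (2 misogynous, 2 not).
text_feats = np.random.randn(4, 8)
image_feats = np.random.randn(4, 16)
y = np.array([1, 1, 0, 0])          # sub-task A labels: misogynous or not

clf = LogisticRegression(max_iter=1000).fit(fuse(text_feats, image_feats), y)
print(clf.predict(fuse(text_feats, image_feats)))
```

For sub-task B, the same fused representation could feed a multi-label classifier, for instance one binary head per category.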
In the usual sense of syntactic parsing, the analyses of relationships among words and relationships among morphemes are separated. The former is called syntactic analysis (or syntactic parsing) and the latter is called morphological analysis (the term "morphological parsing" exists but it seems to denote a somewhat different notion). Sometimes, however, sublexical analysis is relevant. This is evident in shallow semantic parsing, which is understood to be "labeling phrases of a sentence with semantic roles with respect to a target word. For example, the sentence (1) is labeled as (2):" (1) Shaw Publishing offered Mr. Smith a reimbursement last March. The target of the labeling in (2) is offered. Note that the same kind of labeling should be available for the argument structure of reimbursement, which can be illustrated in (3). Clearly, semantic role labeling requires a high-precision predicate-argument analysis of a target predicate, whether the target is a word (e.g., offer) or a morpheme (e.g., reimburse) embedded in a word. The comparison of the two cases shows that a certain kind of parsing is necessary in effective semantic labeling, as Gildea and Palmer (2002) pointed out. But the problem is how to combine the lexical parse by which the predicate-argument structure of offer is recognized with the sublexical parse by which the predicate-argument structure of reimburse is recognized. Integrating the two kinds of parses is not a trivial task. This paper presents the idea of Parallel Distributed Parsing (PDP), which is able to straightforwardly carry out the integration. The presentation, however, is theoretically oriented and the content is rather preliminary: no empirical results are presented other than a few sample parses. No parser implementation is available. The main purpose of this presentation is to illustrate a new model for parsing that integrates lexical and sublexical parsing, which I argue can be a remedy for the problem of data sparseness. Data sparseness is a serious problem in natural language processing (NLP) even now that computers can access more raw data than the average human does. The size of textual raw data automatically acquired from the Web exceeds that which a normal human can read in a lifetime. This suggests, however, that no human seems to suffer from data sparseness. What makes this more mysterious is that humans also employ statistical information in their language processing. The difference between humans and machines, therefore, should lie in the difference in efficiency with which they acquire knowledge, be it syntactic, semantic or morphological, from the linguistic data given. Humans are certainly, at present, able to acquire knowledge far more efficiently than computers. The crucial question is: How is this possible? I argue that data sparseness is a problem in NLP not only because distributional data itself is sparse, but also because the parses available today are sparse and inefficient; otherwise, data sparseness should impact human language processing in the same way it does computers. The explanation I consider is that data contains enough information but current technologies fail to extract it due to the inefficiency of available parses. PDP is proposed to make parses of linguistic data more efficient and less sparse. In what follows, I show how PDP can remedy the data sparseness. 2 Efficient parsing with PDP | 0 |
Recently, dialogue and interactive systems have been emerging with huge commercial values (Qiu et al., 2017; Yan et al., 2016a; Zhang et al., 2017; Zhang et al., 2018b; Zhang et al., 2018a) , especially in the e-commerce field (Cui et al., 2017; Yan et al., 2016b) . Building a chatbot mainly faces two challenges, the lack of dialogue data and poor performance for multi-turn conversations. This paper describes a fine-grained information retrieval (IR) augmented multi-turn chatbot -Lingke. It can learn knowledge without human supervision from conversation records or given product introduction documents and generate proper response, which alleviates the problem of lacking dialogue corpus to train a chatbot. First, by using Apache Lucene 1 to select top 2 sentences most relevant to the question and extracting subject-verb-object (SVO) triples from them, a set of candidate responses is generated. With regard to multi-turn conversations, we adopt a dialogue manager, including self-attention strategy to distill significant signal of utterances, and sequential utterance-response matching to connect responses with conversation utterances, which outperforms all other models in multi-turn response selection. An online demo is available via accessing http://47.96.2.5:8080/ServiceBot/demo/. | 0 |
One of the classic, and still open, tasks of Natural Language Processing (NLP) is Machine Translation (MT). Since its first steps, back in the 1950s (Hutchins and Somers, 1992) , MT has increased its presence in several scenarios providing access to multilingual content. The number of MT initiatives has risen greatly in recent years, mainly in statistical MT, as a result of the availability of vast multilingual parallel texts, but also in rule-based MT, example-based MT or hybrid systems.One such example is the Apertium machine translation engine 1 . Apertium is a transfer-based MT platform which provides the engine, tools and data to perform translations between a large number of language pairs, available under the terms of the GNU General Public License (GNU GPL) 2 , and is being developed by a community of users worldwide. It has been integrated in a wide variety of translation workflows both at public institu-tions like The Open University of Catalonia (Villarejo et al., 2009) and private institutions such as Autodesk (Masselot et al., 2010) .Apertium's basic design is based on the earlier Spanish-Catalan MT system interNOS-TRUM 3 (Canals-Marote et al., 2001) , and the Spanish-Portuguese translator Traductor Universia 4 (Garrido-Alenda et al., 2004) . developed by the Transducens group at the University of Alicante.Apertium is designed to operate as a UNIX pipeline (McIlroy et al., 1978) : each component is implemented as a separate program, which reads and writes a simple text stream. Components can be added or removed as required: typical components in an Apertium pipeline include:• A deformatter which encapsulates the format information in the input as superblanks that will then be seen as blanks between words by the rest of the modules.• A morphological analyser which segments the text in surface forms (words) and delivers, for each of them, one or more lexical forms consisting of lemma, lexical category and morphological inflection information.• A PoS tagger which chooses the most likely lexical form corresponding to an ambiguous surface form.• A lexical transfer module which reads each SL lexical form and delivers the correspond-ing target-language (TL) lexical form by looking it up in a bilingual dictionary.• A morphological generator which delivers a TL surface form for each TL lexical form, by suitably inflecting it.• A post-generator which performs orthographic operations such as contractions (e.g. Spanish del = de + el) and apostrophations (e.g. Catalan l'institut = el + institut).• A reformatter which de-encapsulates any format information.A complete description of the platform can be found in Forcada et al. (2009) .Despite the advances in MT, there has yet to be a system that can produce a perfect translation. For the purpose of assimilation, or the understanding of text, MT can be sufficient; however, for the purpose of dissemination, or the publication of translated material, correction (postediting) by a human editor is typically required. Doyon et al. (2008) divides these edits into two main categories: Full Edits (larger changes, mainly stylistic), and Brief Edits (small changes, mainly syntactic).Automating the process of post-editing is itself an active area of research, both in SMT (Chin and Rosart, 2008) , and in rule-based MT. Llitjós et al. 
(2004) describes a method of automatically refining rules in a transfer-based system based on user feedback; however, the types of edits permitted are restricted to Brief Edits, while for this work, we aimed at allowing the editor full freedom to edit as they saw fit. | 0 |
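The pipeline design described earlier is easiest to see as a composition of independent text-stream transformers. The sketch below is a schematic analogue in Python, not Apertium's actual command-line interface; every stage is a placeholder that simply passes the stream through.

```python
# A schematic sketch (not Apertium's real interface) of the pipeline idea:
# each module is an independent text-stream transformer, and the engine is
# their composition, so stages can be added or removed. All stage functions
# here are placeholders.
from functools import reduce

def deformat(text):       return text   # encapsulate format info as superblanks
def analyse(text):        return text   # surface forms -> lexical forms
def tag(text):            return text   # disambiguate lexical forms (PoS tagging)
def transfer(text):       return text   # SL lexical forms -> TL lexical forms
def generate(text):       return text   # TL lexical forms -> TL surface forms
def postgenerate(text):   return text   # contractions, apostrophation, etc.
def reformat(text):       return text   # restore the original format

PIPELINE = [deformat, analyse, tag, transfer, generate, postgenerate, reformat]

def translate(text, stages=PIPELINE):
    return reduce(lambda stream, stage: stage(stream), stages, text)

print(translate("La casa es grande."))
```

In the real engine each stage is a separate program connected by UNIX pipes, so adding or removing a module does not require touching the others.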
Most successful natural language processing (NLP) systems rely on a set of labeled training examples to induce models in a supervised manner. However, labeling instances to create a training set is time-consuming and expensive. One way to alleviate this problem is to resort to active learning (AL), where a learner chooses which instances, from a large pool of unlabeled data, to give to the human expert for annotation. After each annotation by the expert, the system retrains the learner, and the learner chooses a new instance to annotate. There are many active learning strategies. The simplest and most widely used is uncertainty sampling (Lewis and Catlett, 1994), where the learner queries the instance it is most uncertain about (Scheffer and Wrobel, 2001; Culotta and McCallum, 2005). In query-by-committee, instead, an entire committee of models is used to select the examples with the highest disagreement. At the same time, most studies on active learning are actually synthetic, i.e., the human supervision was merely emulated by holding out already labeled data. In this study, we perform a real active learning experiment. Since speed plays a major role, we do not resort to an ensemble-based query-by-committee approach but use a single model for selection. We evaluate two selection strategies for a sequence tagging task, supersense tagging. 2 Datapoint-selection strategies Given a pool of unlabeled data U, a datapoint-selection strategy chooses a new unlabeled item u_i to annotate. We evaluate two such strategies. They both involve evaluating the informativeness of unlabeled instances. The first strategy (MAX) is similar to the standard approach in uncertainty sampling, i.e., the active learning system selects the datapoint whose classification confidence is the lowest. The second strategy (SAMPLE) attempts to make the selection criterion more flexible by sampling from the confidence score distribution. The two strategies work as follows: 1. MAX: Predict on U and choose the u_i that has the lowest prediction confidence p_i, where p_i is the posterior probability of the classifier for the item u_i. 2. SAMPLE: Predict on U and choose u_i by sampling from the distribution of the inverse confidence scores for all the instances, making low-confidence items more likely to be sampled. We calculate the inverse confidence score as -log(p_i). We apply both datapoint-selection strategies on two different subcorpora sampled from the same unlabeled corpus (cf. Section 3). Each (strategy, subcorpus) tuple yields a system setup for an individual annotator. Table 1 describes the setup of our four annotators. | 0 |
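The two strategies translate directly into code. In the sketch below, `confidences` holds the classifier's posterior probability p_i for each unlabeled item; how these probabilities are produced by the supersense tagger is assumed.

```python
# Sketch of the two selection strategies described above. `confidences` holds
# the classifier's posterior probability p_i for its predicted labeling of each
# unlabeled item u_i; how those probabilities are obtained is assumed.
import numpy as np

rng = np.random.default_rng(0)

def select_max(confidences):
    """MAX: pick the item the model is least confident about."""
    return int(np.argmin(confidences))

def select_sample(confidences):
    """SAMPLE: draw an item with probability proportional to -log(p_i)."""
    scores = -np.log(np.asarray(confidences))
    probs = scores / scores.sum()
    return int(rng.choice(len(confidences), p=probs))

confidences = [0.95, 0.60, 0.99, 0.72]
print(select_max(confidences))      # -> 1 (lowest confidence)
print(select_sample(confidences))   # low-confidence items sampled more often
```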
Hate Speech (HS) against ethnic, religious and national minorities is a growing concern in online discourse (e.g. Foxman & Wolf 2013) , creating a conflict of interest in societies that want to advocate freedom of speech on the one hand, and to protect minorities against defamation on the other (Herz & Molnar 2012) . Social networks, under political pressure to take action, have begun to filter for what they perceive as outright hate speech. Reliable data, actionable definitions and linguistic research is needed for such filtering to be effective without being disproportionate, but also in order to allow policy makers, educational institutions, journalists and other public influencers to understand and counteract the phenomenon of hate speech.The three-year research project behind the work described here is called XPEROHS (Baumgarten et al. 2019) and investigates the expression and perception of online hate speech with a particular focus on immigrant and refugee minorities in Denmark and Germany. In addition to experimental and questionnaire studies, data is collected from two major social networks, Twitter and Facebook. The material is used to examine and quantify linguistic patterns found in hateful discourse and to identify derogatory terms and outright slurs, as well as metaphors used in a demeaning and target-specific way. Finally the corpus is used to provide graded HS examples for the project's empirical work on HS perception. | 0 |
Development of machines with emotional intelligence has been a long-standing goal of AI. With the increasing infusion of interactive systems in our lives, the need for empathetic machines with emotional understanding is paramount. Previous research in affective computing has looked at dialogues as an essential basis to learn emotional dynamics (Sidnell and Stivers, 2012; Poria et al., 2017a; Zhou et al., 2017) .Since the advent of Web 2.0, dialogue videos have proliferated across the internet through platforms like movies, webinars, and video chats. Emotion detection from such resources can benefit numerous fields like counseling (De Choudhury et al., 2013) , public opinion mining , financial forecasting (Xing et al., 2018) , and intelligent systems such as smart homes and chatbots (Young et al., 2018) .In this paper, we analyze emotion detection in videos of dyadic conversations. A dyadic conversation is a form of a dialogue between two entities. We propose a conversational memory network (CMN), which uses a multimodal approach for emotion detection in utterances (a unit of speech bound by breathes or pauses) of such conversational videos.Emotional dynamics in a conversation is known to be driven by two prime factors: self and interspeaker emotional influence (Morris and Keltner, 2000; Liu and Maitlis, 2014) . Self-influence relates to the concept of emotional inertia, i.e., the degree to which a person's feelings carry over from one moment to another (Koval and Kuppens, 2012) . Inter-speaker emotional influence is another trait where the other person acts as an influencer in the speaker's emotional state. Conversely, speakers also tend to mirror emotions of their counterparts (Navarretta et al., 2016) . Figure 1 provides an example from the dataset showing the presence of these two traits in a dialogue.Existing works in the literature do not capitalize on these two factors. Context-free systems infer emotions based only on the current utterance in the conversation (Bertero et al., 2016) . Whereas, state-of-the-art context-based networks like Poria et al., 2017b , use long short-term memory (LSTM) networks to model speaker-based context that suffers from incapability of long-range summarization and unweighted influence from context, leading to model bias.Our proposed CMN incorporates these factors by using emotional context information present in the conversation history. It improves speakerbased emotion modeling by using memory networks which are efficient in capturing long-term dependencies and summarizing task-specific details using attention models (Weston et al., 2014; Graves et al., 2014; Young et al., 2017) .Specifically, the memory cells of CMN are continuous vectors that store the context information found in the utterance histories. CMN also models interplay of these memories to capture interspeaker dependencies.CMN first extracts multimodal features (audio, visual, and text) for all utterances in a video. In order to detect the emotion of a particular utterance, say u i , it gathers its histories by collecting previous utterances within a context window. Separate histories are created for both speakers. These histories are then modeled into memory cells using gated recurrent units (GRUs).After that, CMN reads both the speaker's memories and employs attention mechanism on them, in order to find the most useful historical utterances to classify u i . The memories are then merged with u i using an addition operation weighted by the attention scores. 
This is done to model inter-speaker influences and dynamics. The whole cycle is repeated for multiple hops and, finally, this merged representation of utterance u_i is used to classify its emotion category. The contributions of this paper can be summarized as follows: 1. We propose an architecture, termed CMN, for emotion detection in a dyadic conversation that considers the utterance histories of both speakers to model emotional dynamics. The architecture is extensible to multi-speaker conversations in formats such as textual dialogues or conversational videos. 2. When applied to videos, we adopt a multimodal approach to extract diverse features from utterances. This also makes our model robust to missing information. 3. CMN provides a significant increase in accuracy of 3-4% over previous state-of-the-art networks. One variant, called CMN_self, which does not consider the inter-speaker relation in emotion detection, also outperforms the state of the art by a significant margin. The remainder of the paper is organized as follows: Section 2 provides a brief literature review; Section 3 formalizes the problem statement; Section 4 describes the proposed method in detail; experimental results are covered in Section 5; finally, Section 6 provides concluding remarks. [Figure 1: An abridged dialogue from the dataset, shown as a timeline of turns between Person A and Person B with emotion labels. Person A (wife) is leaving B (husband) for a work assignment. Initially both A and B are emotionally driven by their own emotional inertia. In the end, emotional influence can be seen when B, despite being sad, reacts angrily to A's angry statement.] | 0 |
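A single CMN-style hop can be sketched as follows. This is an illustration rather than the authors' code: the feature size, the dot-product attention and the way the two speakers' memory summaries are added to the query are simplifying assumptions.

```python
# Illustrative sketch of one CMN-style attention hop (not the authors' code):
# each speaker's utterance history is encoded into memory cells with a GRU, the
# current utterance attends over both memories, and the attention-weighted
# summaries are added back to the query.
import torch, torch.nn as nn, torch.nn.functional as F

D = 100                                   # utterance feature size (assumed)
gru_a = nn.GRU(D, D, batch_first=True)    # memory encoder for speaker A's history
gru_b = nn.GRU(D, D, batch_first=True)    # memory encoder for speaker B's history

def hop(query, hist_a, hist_b):
    """query: [1, D]; hist_a/hist_b: [1, K, D] multimodal features of past utterances."""
    mem_a, _ = gru_a(hist_a)              # [1, K, D] memory cells
    mem_b, _ = gru_b(hist_b)
    out = query
    for mem in (mem_a, mem_b):
        att = F.softmax(mem[0] @ query[0], dim=0)     # [K] attention over memories
        out = out + (att.unsqueeze(0) @ mem[0])       # add weighted memory summary
    return out                                        # refined representation of u_i

u_i = torch.randn(1, D)
hist_a, hist_b = torch.randn(1, 6, D), torch.randn(1, 6, D)
refined = u_i
for _ in range(3):                        # multiple hops refine the query
    refined = hop(refined, hist_a, hist_b)
print(refined.shape)                      # torch.Size([1, 100])
```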
Generating referring expressions has recently been extended from the identification of single to sets of objects. However, existing algorithms suffer in terms of efficiency and expressiveness. In this paper, we report on a system that applies a best-first searching procedure, with an enhanced effectiveness and a larger variety of expressions it can generate. The system's repertoire includes compositions of partially identifying expressions and descriptions of objects to be excluded, thereby taking into account impacts on surface forms.Throughout this paper, we refer to a scenario with a set of 12 vehicles as defined in Figure 1 . All vehicles are identifiable individually, to make the identification task meaningful. Only minor differences hold between some of these vehicles, which makes the identification task challenging.This paper is organized as follows. First, we motivate our goals. Then we describe techniques for enhancing efficiency. We follow by illustrating improvements of expressiveness. Finally, we evaluate several efficiency-related techniques. | 0 |
Crypto markets are "online forums where goods and services are exchanged between parties who use digital encryption to conceal their identities" (Martin, 2014) . They are typically hosted on the Tor network, which guarantees anonymization in terms of IP and location tracking. The identity of individuals on a crypto-market is associated only with a username; therefore, building trust on these networks does not follow conventional models prevalent in eCommerce. Interactions on these forums are facilitated by means of text posted by their users. This makes the analysis of textual style on these forums a compelling problem.Stylometry is the branch of linguistics concerned with the analysis of authors' style. Text stylometry was initially popularized in the area of forensic linguistics, specifically to the problems of author profiling and author attribution (Juola, 2006; Rangel et al., 2013) . Traditional techniques for authorship analysis on such data rely upon the existence of long text corpora from which features such as the frequency of words, capitalization, punctuation style, word and character n-grams, function word usage can be extracted and subsequently fed into any statistical or machine learning classification framework, acting as an author's 'signature'. However, such techniques find limited use in short text corpora in a heavily anonymized environment.Advancements in using neural networks for character and word-level modeling for authorship attribution aim to deal with the scarcity of easily identifiable 'signature' features and have shown promising results on shorter text (Shrestha et al., 2017) . Andrews and Witteveen (2019) drew upon these advances in stylometry to propose a model for building representations of social media users on Reddit and Twitter. Motivated by the success of such approaches, we develop a novel methodology for building authorship representations for posters on various darknet markets. Specifically, our key contributions include: First, a representation learning approach that couples temporal content stylometry with access identity (by levering forum interactions via meta-path graph context information) to model and enhance user (author) representation; Second, a novel framework for training the proposed models in a multitask setting across multiple darknet markets, using a small dataset of labeled migrations, to refine the representations of users within each individual market, while also providing a method to correlate users across markets; Third, a detailed drill-down ablation study discussing the impact of various optimizations and highlighting the benefits of both graph context and multitask learning on forums associated with four darknet markets -Black Market Reloaded, Agora Marketplace, Silk Road, and Silk Road 2.0 -when compared to the state-of-the-art alternatives. | 0 |
Data-to-text generation is a task of automatically producing text from non-linguistic input (Gatt and Krahmer, 2018) . The input can be in various forms such as databases of records, spreadsheets, knowledge bases, simulations of physical systems.Traditional methods for data-to-text generation (Kukich, 1983; Reiter and Dale, 2000; Mei et al., 2015 ) implement a pipeline of modules including content planning, sentence planning and surface realization. Recent neural generation systems (Lebret et al., 2016; Wiseman et al., 2017a) are trained in an end-to-end fashion using the very successful encoder-decoder architecture (Bahdanau et al., 2014) as their backbone. Ferreira et al. (2019) introduce a systematic and comprehensive comparison between pipeline and end-to-end architectures for this task and conclude that the pipeline models can generate better texts and generalize better to unseen inputs than end-to-end models.However, with the rapid development of the Seq2Seq models especially pre-trained models, more and more end-to-end architectures based on Seq2Seq paradigm get state-of-the-art results on data-to-text benchmarks nowadays. Although BLEU score (Papineni et al., 2002) , which is based on precision, has been improved dramatically on standard data-to-text benchmarks such as WebNLG , ToTTo (Parikh et al., 2020) and RotoWire (Wiseman et al., 2017b) over the recent years, it is commonly accepted that, compared with human evaluation, BLEU score can not evaluate the models very well. It is too coarse-grained to reflect the different dimensions of the models' performance and not always consistent with human judgment (Novikova et al., 2017a; Reiter, 2018; Sulem et al., 2018) . Moreover, existing human evaluations on data-to-text generation are usually limited in size of samples, numbers of datasets and models, or dimensions of evaluation.In this study, we aim to conduct a thorough and reliable manual evaluation on Seq2Seq-based endto-end data-to-text generation based on multiple datasets and evaluation dimensions. We want to know the pros and cons of different Seq2Seq models on this task, and the factors influencing the generation performance. Particularly, following Multidimensional Quality Metric(MQM) (Mariana, 2014) , similar to the job on summarization evaluation (Huang et al., 2020) , we use 8 metrics on the Accuracy and Fluency aspects to count errors, respectively. Therefore, compared with existing manual evaluation reports, it is more informative and objective.Using this method, we manually evaluate several representative models, including Transformer (Vaswani et al., 2017) , Transformer with Pointer Generator (See et al., 2017) , T5(small&base) (Raffel et al., 2019) , BART(base) (Lewis et al., 2019) 1 . We test these models on four common datasets, including E2E (Novikova et al., 2017b) , WebNLG , WikiBio (Lebret et al., 2016 ), ToTTo (Parikh et al., 2020 ). Thus we can discuss the effectiveness of the pre-training method, some essential techniques and the number of parameters. We can also compare the differences between datasets and how they influence the models' performance. Empirically, we find that:1. Pre-training: Pre-training is powerful and effective which highly increases the ability of the Seq2Seq paradigm on the data-to-text task. | 0 |
The rise of machine learning (ML) methods and the availability of treebanks (Buchholz and Marsi, 2006) for a wide variety of languages have led to a rapid increase in research on data-driven dependency parsing (McDonald and Pereira, 2006; Nivre, 2008; Kiperwasser and Goldberg, 2016) . However, the performance of dependency parsers heavily relies on the size of corpus. Due to the great cost and difficulty of acquiring sufficient training data, ML-based methods cannot be trivially applied to low-resource languages.Cross-lingual transfer is a promising approach to tackle the lack of sufficient data. The idea is to train a cross-lingual model that transfers knowledge learned in one or multiple high-resource source languages to target ones. This approach has been successfully applied in various tasks, including part-of-speech (POS) tagging (Kim et al., 2017) , dependency parsing (McDonald et al., 2011) , named entity recognition (Xie et al., 2018) , entity linking , question answering (Joty et al., 2017) , and coreference resolution .A key challenge for cross-lingual parsing is the difficulty to handle word order difference between source and target languages, which often causes a significant drop in performance (Rasooli and Collins, 2017; Ahmad et al., 2019) . Inspired by the idea that POS sequences often reflect the syntactic structure of a language, we propose CURSOR (Cross lingUal paRSing by wOrd Reordering) to overcome the word order difference issue in crosslingual transfer. Specifically, we assume we have a treebank in the source language and annotated POS corpus in the target language 1 . We first train a POS-based language model on a corpus in the target language. Then, we reorder words in each sentence on the source corpus based on the POSbased language model to create pseudo sentences with target word order. The resulting reordered treebank can be used to train a cross-lingual parser with multi-lingual word embeddings.We formalize word reordering as a combinatorial optimization problem to find the permutation with the highest probability estimated by a POS-based language model. However, it is computationally difficult to obtain the optimal word order. To find a near-optimal result, we develop a population-based optimization algorithm. The algorithm is initialized with a population of feasible solutions and iteratively produces new generations by specially designed genetic operators. At each iteration, better solutions are generated by applying selection, crossover, and mutation subroutines to individuals in the previous iteration.Our contributions are summarized as follows:(i) We propose a novel cross-lingual parsing approach, called CURSOR, to overcome the word order difference issue in cross-lingual transfer by POS-guided word reordering. We formalize word reordering as a combinatorial optimization problem and develop a population-based optimization algorithm to find a near-optimal reordering result.(ii) Extensive experimentation with different neural network architectures and two dominant parsing paradigms (graph-based and transition-based) shows that our approach achieves an increase of 1.73% in average UAS, if English is taken as the source language and the performance is evaluated on other 25 target languages. 
Specifically, for the RNN-Graph model, our approach gains an increase of 2.5% in average UAS, and the improvement rises to 4.12% by the combination of our data augmentation and ensemble method.(iii) Our approach performs exceptionally well when the target languages are quite different from the source one in their word orders. For example, when transferring the English RNN-Graph parser to Hindi and Latin, our approach outperforms a baseline by 15.3% and 6.7%, respectively. | 0 |
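The population-based search can be sketched with a toy genetic algorithm over index permutations. Everything concrete here is invented for illustration: the bigram preference table stands in for the trained POS-based language model, and the example sentence, population size and operators are simplifications of the procedure described above.

```python
# Toy sketch of population-based reordering (not the paper's implementation).
# `pos_lm_score` stands in for a POS-based language model of the target language;
# here it is a made-up bigram preference table.
import random
random.seed(0)

TARGET_BIGRAM_PREF = {("NOUN", "VERB"): 2.0, ("ADJ", "NOUN"): 1.5, ("VERB", "NOUN"): 0.5}

def pos_lm_score(pos_seq):                       # higher = more target-like order
    return sum(TARGET_BIGRAM_PREF.get(bi, 0.1) for bi in zip(pos_seq, pos_seq[1:]))

def crossover(p1, p2):                           # order crossover on index permutations
    a, b = sorted(random.sample(range(len(p1)), 2))
    middle = p1[a:b]
    rest = [i for i in p2 if i not in middle]
    return rest[:a] + middle + rest[a:]

def mutate(p):                                   # swap two positions
    i, j = random.sample(range(len(p)), 2)
    p = list(p); p[i], p[j] = p[j], p[i]
    return p

def reorder(words, pos, generations=50, size=30):
    score = lambda perm: pos_lm_score([pos[i] for i in perm])
    pop = [random.sample(range(len(words)), len(words)) for _ in range(size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[: size // 2]               # selection: keep the fittest half
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(size - len(parents))]
    pop.sort(key=score, reverse=True)
    return [words[i] for i in pop[0]]

# With the toy table above, the best orders put ADJ before NOUN and NOUN before VERB.
print(reorder(["reads", "John", "books", "old"], ["VERB", "NOUN", "NOUN", "ADJ"]))
```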
Natural languages are rife with ambiguity. There are lexical ambiguities; words in isolation may be seen to have multiple syntactic and semantic senses. There are syntactic ambiguities; the same sequence of words may be viewed as constituting different structures. And finally, there are semantic and pragmatic ambiguities, all of which may be resolved in context. Ambiguity and its context-sensitive disambiguation, it turns out, are two important characteristics of abductive inferences. There have been various attempts at characterizing abductive inference and its explanatory nature [Appelt, 90; Charniak and McDermott, 85; Hobbs, et al., 88; Josephson, 90; Konolige, 90; Pople, 73; Reggia, 85; etc.]. While they differ somewhat in details, they all boil down to accounting for some observed features using potential explanations consistently in a "parsimonious" (often "minimal") way. Over the past decade, a formal model for abduction based on these ideas was developed at Maryland; this theory is called parsimonious covering. The theory originated in the context of simple diagnostic problems, but was extended later to complex knowledge structures involving chaining of causal associations. A diagnostic problem specified in terms of a set of observed manifestations is solved in parsimonious covering by satisfying the coverage goal and the goal of parsimony. Satisfying the coverage goal requires accounting for each of the observed manifestations through the known causal associations. Ambiguity arises here, because the same manifestation may be caused by any one of several candidate disorders. Ensuring that a cover contains a "parsimonious" set of disorders satisfies the goal of parsimony. There could potentially be a large number of covers for the observed manifestations, but the "parsimonious" ones from among them are expected to lead to more plausible diagnoses. The plausible account for a manifestation may be one disorder in one context and another disorder in a different context. Such contextual effects are to be handled automatically by the specific criterion of parsimony that is chosen. For medical diagnosis, reasonable criteria of parsimony are minimal cardinality, irredundancy and relevancy [Peng, 85]. Minimal cardinality says that the diagnosis should contain the smallest possible number of disorders that can cover the observed symptoms. A cover is considered irredundant (not redundant) if none of its proper subsets is also a cover, i.e., if it contains no disorder whose removal still leaves a cover of the observed symptoms. Relevancy simply says that each disorder in the cover should be capable of causing at least one of the observed manifestations. Consider an abstract example where disorder d1 can cause any of the manifestations m1 and m2; d2 can cause any of m1, m2 and m3; d3 can cause m3; d4 can cause m3 and m4; and finally, d5 can cause m4. If the manifestations {m1, m2, m3} were observed, the disorder set {d2} constitutes a minimal cardinality cover; the irredundant covers that are not minimal cardinality covers are {d1, d3} and {d1, d4}; and an example of a redundant, but relevant, cover would be {d1, d3, d4}.
While {d2, d5} is a cover that has an irrelevant disorder (d5) in it, {d3, d4} is a non-cover, since together the disorders in this set cannot account for all observed manifestations. Several natural language researchers have been actively involved in modeling abductive inferences that occur at higher levels in natural language, e.g., at the pragmatics level. Abductive unifications that are required in performing motivation analysis, for instance, might call for making the least number of assumptions that might potentially prove false [Charniak, 88]. Litman uses a similar notion of unification, called consistency unification [Litman, 85]. Hobbs and his associates propose a method that involves minimizing the cost of abductive inference, where the cost might involve several different components [Hobbs et al., 88]. Although [Charniak and McDermott, 85] indicate that word sense disambiguation might be viewed as abductive, nobody has pursued this line of research. It is very clear that there exists a strong analogy between diagnostic parsimonious covering and concepts in natural language processing. There are, however, important differences as well. These similarities and differences are summarized in Table I. We have tried to extend parsimonious covering to address some of the idiosyncrasies of language (contrasted to diagnosis) and apply it to low-level natural language processing. | 0 |
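The cover criteria on the abstract example can be checked mechanically. The sketch below is written directly from the definitions given above (coverage, minimal cardinality, irredundancy, relevancy) rather than from any published parsimonious-covering system.

```python
# Sketch of the cover criteria on the abstract example above (d1-d5, m1-m4),
# written from the definitions in the text, not from the authors' system.
from itertools import combinations

causes = {"d1": {"m1", "m2"}, "d2": {"m1", "m2", "m3"}, "d3": {"m3"},
          "d4": {"m3", "m4"}, "d5": {"m4"}}
observed = {"m1", "m2", "m3"}

def is_cover(ds):
    return observed <= set().union(*(causes[d] for d in ds)) if ds else False

def is_irredundant(ds):
    return is_cover(ds) and not any(is_cover(ds - {d}) for d in ds)

def is_relevant(ds):
    return all(causes[d] & observed for d in ds)

covers = [set(c) for r in range(1, 6) for c in combinations(causes, r) if is_cover(set(c))]
min_card = min(len(c) for c in covers)
print("minimal-cardinality covers:", [c for c in covers if len(c) == min_card])
print("irredundant covers:", [c for c in covers if is_irredundant(c)])
print("{d1, d3, d4} cover / relevant:",
      is_cover({"d1", "d3", "d4"}), is_relevant({"d1", "d3", "d4"}))
```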
Paraphrases have been applied to various NLP applications, and recently they have been recognized as a useful resource for natural language understanding, such as semantic parsing (Berant and Liang, 2014) and automatic question answering (Dong et al., 2017). While most previous studies focused on sentential paraphrase detection, e.g., (Dolan et al., 2004), finer-grained paraphrases, i.e., phrasal paraphrases, are desired by these applications. In addition, syntactic structures are important in modeling sentences, e.g., their sentiments and semantic similarities (Socher et al., 2013; Tai et al., 2015). A few studies worked on phrasal paraphrase identification in sentential paraphrases (Yao et al., 2013); however, the units of correspondence in previous studies are defined as sequences of words and not syntactic phrases, due to difficulties caused by the non-homographic nature of phrase correspondences. To overcome these challenges, one promising approach is phrase alignment on paraphrasal sentence pairs based on their syntactic structures derived by a linguistically motivated grammar. A flexible mechanism to allow non-compositional phrase correspondences is also required. We have published our initial attempt in this direction (Arase and Tsujii, 2017). For systematic research on syntactic phrase alignment in paraphrases, an evaluation dataset as well as evaluation measures are essential. Hence, we constructed the SPADE (Syntactic Phrase Alignment Dataset for Evaluation) and released it through the Linguistic Data Consortium (catalog ID: LDC2018T09). In the SPADE, 201 sentential paraphrases are annotated with gold parse trees, on which 20,276 phrases exist. Three annotators annotated alignments among these phrases as shown in Figure 1, resulting in 15,721 alignments that at least one annotator regarded as paraphrases. We also propose two measures to evaluate the quality of phrase alignment on the SPADE, which have been used as official evaluation metrics in Arase and Tsujii (2017). These measures allow objective comparison of the performance of different methods evaluated on the SPADE. | 0 |
Conversation within a dialogue can be thought of as an exchange of utterances between two speakers. Each utterance is not independent of one another but is instead grounded within a larger dialogue context known to both parties (Jurafsky and Martin, 2018; Sordoni et al., 2015; Serban et al., 2016; Dziri et al., 2019) . Indeed, if a response to an utterance fails to be faithful to some given knowledgei.e. by producing false information-it is uninformative and runs the risk of jeopardizing the entire enterprise of conversation. More precisely, this means that in addition to being fluent, topical, and grammatical, utterances within a dialogue must also be factually correct.The faithfulness of responses is of principal importance when designing dialogue systems that are grounded using auxiliary knowledge such as Knowledge Graphs (KG). Despite maintaining plausible general linguistic capabilities, dialogue models are still unable to fully discern facts and may instead hallucinate factually invalid information. Moreover, empirical evidence for hallucination in Language Models (LM) runs contrary to known studies that these large models are capable of recalling factual knowledge, e.g. entities and relations in a KG, (Roberts et al., 2020; Petroni et al., 2019) . This suggests that this inherent lack of controllability may be remedied by leveraging external oracle knowledge. However, existing approaches to knowledge grounding often suffer from a source-reference divergence problem whereby the reference contains additional factual information and simply training on the reference is insufficient to guarantee faithfulness (Wiseman et al., 2017; Parikh et al., 2020; Tian et al., 2019) . Consequently, ensuring the faithfulness of knowledge grounded dialogue systems-via precise alignment of the source and reference-remains an open challenge. Present Work. In this work, we focus on address-ing the open problem of hallucination of factually invalid statements in knowledge grounded dialogue systems where the source of knowledge is a KG. We first identify prominent modes of hallucination by conducting a systematic human study on generated responses which reveals one major source of hallucination as the (mis)-use of wrong entities to describe factual content (Kryscinski et al., 2020) , a problem that persists when naively applying language models in dialogue systems.To enforce faithfulness to the misattribution of entities in grounded dialogue systems, we introduce NEURAL PATH HUNTER (NPH), a module that operates on hallucinated responses. NPH follows a generate-then-refine approach by augmenting conventional dialogue generation with an additional refinement stage enabling the dialogue system to correct potential hallucinations by querying the KG. NPH grounds dialogue generation by constraining the flow of conservation to be supported by a valid path on the KG. To do so, the module combines a token-level hallucination critic that masks out entities of concern in an utterance, followed by a pre-trained nonautoregressive LM which prescribes contextual representations for each masked entity. This is then fed sequentially to an autoregressive LM to obtain output representations. These output representations can then be used to efficiently launch a query on the KG-effectively modelling dialogue as a signal being propagated on a local k-hop subgraph whereby locality is enforced through the conversation history-returning factually correct entities. 
Our proposed approach is applicable to any generated response whenever an available KG is provided and works without further fine-tuning. The high-level overview of our proposed approach is outlined in Fig. 1 and exemplar machine-generated responses post-refinement are presented in Table 8 in §H. Our main contributions are summarized as follows:• We conduct a comprehensive human study on hallucinations generated by state-of-the-art dialogue systems which reveals that the main mode of hallucinations is through the injection of erroneous entities in generated responses.• We propose NEURAL PATH HUNTER, which leverages facts supplied by a KG to reduce hallucination in any machine-generated response.• We empirically demonstrate that NEURAL PATH HUNTER substantially reduces hallucinations in KG-grounded dialogue systems with a relative improvement of 20.35% in FeQA, a QA-based faithfulness metric (Durmus et al., 2020) , and an improvement of 39.98% in human evaluation. | 0 |
Sentiment analysis of text documents has received considerable attention recently (Shanahan et al., 2005; Turney, 2002; Dave et al., 2003; Hu and Liu, 2004; Chaovalit and Zhou, 2005) . Unlike traditional text categorization based on topics, senti-ment analysis attempts to identify the subjective sentiment expressed (or implied) in documents, such as consumer product or movie reviews. In particular Pang and Lee proposed the rating-inference problem (2005) . Rating inference is harder than binary positive / negative opinion classification. The goal is to infer a numerical rating from reviews, for example the number of "stars" that a critic gave to a movie. Pang and Lee showed that supervised machine learning techniques (classification and regression) work well for rating inference with large amounts of training data.However, review documents often do not come with numerical ratings. We call such documents unlabeled data. Standard supervised machine learning algorithms cannot learn from unlabeled data. Assigning labels can be a slow and expensive process because manual inspection and domain expertise are needed. Often only a small portion of the documents can be labeled within resource constraints, so most documents remain unlabeled. Supervised learning algorithms trained on small labeled sets suffer in performance. Can one use the unlabeled reviews to improve rating-inference? Pang and Lee (2005) suggested that doing so should be useful.We demonstrate that the answer is 'Yes.' Our approach is graph-based semi-supervised learning. Semi-supervised learning is an active research area in machine learning. It builds better classifiers or regressors using both labeled and unlabeled data, under appropriate assumptions (Zhu, 2005; Seeger, 2001 ). This paper contains three contributions:• We present a novel adaptation of graph-based semi-supervised learning (Zhu et al., 2003) to the sentiment analysis domain, extending past supervised learning work by Pang and Lee (2005) ;• We design a special graph which encodes our assumptions for rating-inference problems (section 2), and present the associated optimization problem in section 3;• We show the benefit of semi-supervised learning for rating inference with extensive experimental results in section 4. | 0 |
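The paper's specific graph construction and optimization are given in its later sections; purely as an illustration of the underlying idea, the sketch below runs a generic graph-based label propagation over a toy set of reviews. The features, the RBF k-nearest-neighbour graph, and the clamping scheme are assumptions for the example, not the authors' exact design.

    import numpy as np

    # Toy setup: each review is a feature vector; a few have known ratings, most do not.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(12, 5))          # 12 reviews, 5 features (e.g., sentiment word counts)
    ratings = np.full(12, np.nan)
    ratings[:3] = [1.0, 3.0, 4.0]         # only the first three reviews are labeled

    def rbf_knn_graph(X, k=4, gamma=0.5):
        # Similarity graph: RBF kernel, keeping only each node's k nearest neighbours.
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-gamma * d2)
        np.fill_diagonal(W, 0.0)
        keep = np.argsort(-W, axis=1)[:, :k]
        mask = np.zeros_like(W, dtype=bool)
        for i, cols in enumerate(keep):
            mask[i, cols] = True
        return W * (mask | mask.T)        # symmetrise

    def propagate(W, y, n_iter=50):
        # Unlabeled nodes repeatedly take the weighted average of their neighbours' ratings.
        labeled = ~np.isnan(y)
        f = np.where(labeled, y, np.nanmean(y))   # initialise unlabeled nodes with the mean rating
        for _ in range(n_iter):
            f = W @ f / np.maximum(W.sum(1), 1e-12)
            f[labeled] = y[labeled]               # clamp the labeled nodes
        return f

    W = rbf_knn_graph(X)
    print(np.round(propagate(W, ratings), 2))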
Text-based Question Answering (QA) is a hot research topic and the increasing availability of electronic text will ensure that research in this area will continue for long. Much of the current research on QA focuses on the development of methodologies for processing relatively large volumes of text. For example, the competitionbased QA track of the Text REtrieval Conference (TREC) (Voorhees, 2001 ) uses more than 3Gb of source text. Competing systems often exploit the data redundancy existing in the source text. Some of them even use the Web to increase the data redundancy (Brill et al., 2001; Clarke et al., 2001 , for example). These systems typically trade accuracy for speed and avoid the use of intensive natural language processing techniques.Most of the current QA systems are based on an architecture like that of Figure 1 (Hirschman and Gaizauskas, 2001; Voorhees, 2001) . In an off-line or indexing stage, an indexing module analyses the text documents and creates a set of document images that will be used by the subsequent QA modules. In an on-line stage, a question analysis module classifies the question and determines the type of the expected answer. The question analysis module would typically return a list of named-entity types that are compatible with the question (for example a who question typically indicates people or organisations). The module may also produce an image of the question. This image may be similar in format to the document images and can range from a simple bag of words (Cooper and Rüger, 2000 , for example) to a fairly complex logical form (Harabagiu et al., 2001; Elworthy, 2000, for example) . Once the question is analysed, a document preselection module identifies the documents that are most likely to contain the answer. This module typically uses information retrieval techniques that rely on bag-of-words approaches and statistical information (Voorhees, 2001) . A filtering module examines the resulting documents and selects or rewards the named entities that are compatible with the question type. A scoring module then performs a more intensive analysis and ranks the preselected named entities (an possibly surrounding text) according to their likelihood to contain the answer. The scoring system relies on the output given by the question analysis module and possibly the images of the preselected documents that were created during the off-line stage. There may be feedback loops between the document preselection, filtering, and scoring modules to increase the likelihood of find- ing difficult answers (Harabagiu et al., 2001 , for example).The scoring methodology used can be as simple as a word overlap or word frequency count, or as complex as an automatic proof system that operates on logical forms of both the questions and the answers. In some QA systems the scoring system relies heavily on the use of sentence patterns (Soubbotin, 2001, for example) .In the present study we have implemented a QA framework that uses simplifications of the modules described above. The emphasis in this study has been placed on the comparison of core methodologies for the scoring stage. To avoid the introduction of unwanted variables, we have avoided the use of methodologies that rely on world knowledge or domain knowledge. Thus, we have refrained from testing the use of external resources or inference systems.With this QA framework we intend to assess the impact of syntactic and semantic information in a QA task. 
For that reason, we include information regarding word dependencies, grammatical relations, and logical forms in the procedure to measure the similarity between a question and an answer candidate. The results of the evaluations show both the impact of the scoring measures and the impact of the parsers used to extract the syntactic and semantic information. Section 2 describes grammatical relations and their use in an overlap measure. Section 3 focuses on the overlap of flat logical forms. Section 4 introduces the QA system that was used for the evaluations, and Section 5 explains the methodology used in the automatic evaluations. Finally, Section 6 discusses the results and Section 7 concludes and points to future lines of research. | 0 |
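As a hedged illustration of the kind of grammatical-relation overlap measure discussed in Section 2, the snippet below scores a candidate answer sentence by the fraction of the question's dependency triples that it shares. The triples and the scoring function are hypothetical stand-ins, not the paper's exact definition or parser output.

    # Score a candidate answer by the overlap of its grammatical relations
    # (head, relation, dependent triples) with those of the question.
    def relation_overlap(question_rels, answer_rels):
        q, a = set(question_rels), set(answer_rels)
        if not q:
            return 0.0
        return len(q & a) / len(q)

    # Hypothetical triples such as a dependency parser might produce.
    question = {("invented", "nsubj", "who"), ("invented", "obj", "telephone")}
    candidate = {("invented", "nsubj", "Bell"), ("invented", "obj", "telephone"),
                 ("telephone", "det", "the")}
    print(relation_overlap(question, candidate))   # one of the two question relations matches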
In hybrid context-dependent DNN-HMM speech recognition methods, an artificial neural network (ANN) with multiple non-linear hidden layers is trained to output posterior probabilities of output frame labels corresponding to tied HMM triphone states (senones). The input is a higher-dimensional feature vector composed of consecutive concatenated frames in which Mel-Frequency Cepstral Coefficients (MFCC) or filter-bank features are successively normalized and transformed on a per-speaker basis. The alignments and output labels for DNN training come from the GMM-HMM system, with a set of HMM triphone states and their corresponding Gaussian models. Due to the power of DNNs and their efficiency in learning discriminative features, recent attempts have used much higher-dimensional input features to DNN-HMM systems, such as the I-vector based speaker adaptive training (SAT) techniques, in which several-hundred-dimensional per-speaker I-vector features are added as extra input to the conventional 40-dimensional GMM-SAT features to form the final input to the neural net (Karafiát et al., 2011; Gupta et al., 2014). The GMM-SAT features are spliced across several to tens of consecutive frames rather than just one frame as in a GMM system. Related work has been done using syllable-based acoustic modeling for large-vocabulary continuous speech recognition (LVCSR) on both monosyllabic and polysyllabic languages, including Mandarin (Lee et al., 1993; Pan et al., 2012; Deng Li and Li Xiao, 2013; Hu et al., 2014; X. Li and X. Wu, 2014) and Western languages (Hinton et al., 2012; Swietojanski et al., 2013; Gupta & Boulianne, 2013; Schmidhuber 2015). However, automatic STT for Cantonese is far behind. This motivated us to investigate STT for Cantonese. In this work, all models are implemented using the Kaldi toolkit. The rest of the paper is organized as follows. Section 2 presents both the acoustic and language models used in our system. Section 3 describes the baseline DNN-HMM system, which does not use I-vectors for speaker adaptation, followed by the DNN-HMM system with speaker adaptation in Section 4. Experiments on the system-wide parameter tuning and performance evaluation will be discussed in Section 5. Section 6 reports the conclusion and the future work. | 0
Despite the progress on LGBT+ rights, Internet still remains a hostile environment for LGBT+ people. The growing number, intensity, and complexity of online hate cases is also reflected in the real world: Anti-LGBT+ hate crimes increased dramatically in the last three years. 1 In 2020, the UK's LGBT+ anti-violence charity (Galop) presented a report about online hate crimes regarding homophobia, biphobia, and transphobia. 2 They surveyed 700LGBT+ people distributed through online community networks of LGBT+ activists and individuals. The results are worrisome: 8 out of 10 people experienced online hate speech in the last five years, and 1 out of 5 said they had been victims of online abuse at least 100 times. Transgender people experience online harassment at a higher rate (93%) than cisgender ones (70%). It is also alarming that 18% of people claimed that online abuse was linked with offline incidents. These statistics show a worrying picture of the everyday experience that LGBT+ people are living.Natural language processing (NLP) has emerged as a significant field of research for combating online hate speech because of its ability to automate the process at scale while, at the same time, decreasing the labor and emotional stress on online moderators (Chaudhary et al., 2021) . Despite the interest of the NLP community in creating datasets and models for the task of hate speech detection, no research effort has been made to cover homophobia and transphobia specifically. This is a problem because Nozza (2021) has demonstrated that hate speech detection models do not transfer to different hate speech target types.The shared task of Homophobia and Transphobia Detection (Chakravarthi et al., 2022) enabled researchers to investigate solutions for this problem with the introduction of a novel dataset. The dataset comprises around 5k YouTube comments manually annotated with respect to the presence of homophobia and transphobia. The corpus shows a high imbalance with respect to the non-hateful class, which covers 95% of the dataset. In this paper, we propose an approach designed to overcome the problem of class imbalance. We use ensemble modeling to combine different fine-tuned large language models. We also perform data augmentation from an external dataset to include more homophobic and transphobic instances. However, data augmentation results in lower performance, and we did not use it for the submission.Our system ranked third for the English track with a macro F1-score of 0.48 and a weighted F1score of 0.94. | 0 |
Entity coreference resolution has become a critical component for many Natural Language Processing (NLP) tasks. Systems requiring deep language understanding, such as information extraction (Wellner et al., 2004) , semantic event learning (Chambers and Jurafsky, 2008; Chambers and Jurafsky, 2009) , and named entity linking (Durrett and Klein, 2014; Ji et al., 2014) all benefit from entity coreference information.Entity coreference resolution is the task of identifying mentions (i.e., noun phrases) in a text or dialogue that refer to the same real-world entities. In recent years, several supervised entity coreference resolution systems have been proposed, which, according to Ng (2010) , can be categorized into three classes -mention-pair models (McCarthy and Lehnert, 1995) , entity-mention models (Yang et al., 2008a; Haghighi and Klein, 2010; Lee et al., 2011) and ranking models (Yang et al., 2008b; Durrett and Klein, 2013; Fernandes et al., 2014) -among which ranking models recently obtained state-of-the-art performance. However, the manually annotated corpora that these systems rely on are highly expensive to create, in particular when we want to build data for resource-poor languages (Ma and Xia, 2014) . That makes unsupervised approaches, which only require unannotated text for training, a desirable solution to this problem.Several unsupervised learning algorithms have been applied to coreference resolution. Haghighi and Klein (2007) presented a mention-pair nonparametric fully-generative Bayesian model for unsupervised coreference resolution. Based on this model, Ng (2008) probabilistically induced coreference partitions via EM clustering. Poon and Domingos (2008) proposed an entity-mention model that is able to perform joint inference across mentions by using Markov Logic. Unfortunately, these unsupervised systems' performance on accuracy significantly falls behind those of supervised systems, and are even worse than the deterministic rule-based systems. Furthermore, there is no previous work exploring the possibility of developing an unsupervised ranking model which achieved state-of-theart performance under supervised settings for entity coreference resolution.In this paper, we propose an unsupervised generative ranking model for entity coreference resolution. Our experimental results on the English data from the CoNLL-2012 shared task (Pradhan et al., 2012) show that our unsupervised system outperforms the Stanford deterministic system (Lee et al., 2013) by 3.01% absolute on the CoNLL official metric. The contributions of this work are (i) proposing the first unsupervised ranking model for entity coreference resolution. (ii) giving empirical evaluations of this model on benchmark data sets. (iii) considerably narrowing the gap to supervised coreference resolution accuracy. | 0 |
Abstractive summarization is the challenging NLG task of compressing and rewriting a document into a short, relevant, salient, and coherent summary. It has numerous applications such as summarizing storylines, event understanding, etc. As compared to extractive or compressive summarization (Jing and McKeown, 2000; Knight and * Equal contribution. Marcu, 2002; Clarke and Lapata, 2008; Filippova et al., 2015; Henß et al., 2015) , abstractive summaries are based on rewriting as opposed to selecting. Recent end-to-end, neural sequence-tosequence models and larger datasets have allowed substantial progress on the abstractive task, with ideas ranging from copy-pointer mechanism and redundancy coverage, to metric reward based reinforcement learning (Rush et al., 2015; Chopra et al., 2016; Nallapati et al., 2016; See et al., 2017) .Despite these strong recent advancements, there is still a lot of scope for improving the summary quality generated by these models. A good rewritten summary is one that contains all the salient information from the document, is logically followed (entailed) by it, and avoids redundant information. The redundancy aspect was addressed by coverage models (Suzuki and Nagata, 2016; Chen et al., 2016; Nallapati et al., 2016; See et al., 2017 ), but we still need to teach these models about how to better detect salient information from the input document, as well as about better logicallydirected natural language inference skills.In this work, we improve abstractive text summarization via soft, high-level (semantic) layerspecific multi-task learning with two relevant auxiliary tasks. The first is that of document-toquestion generation, which teaches the summarization model about what are the right questions to ask, which in turn is directly related to what the salient information in the input document is. The second auxiliary task is a premise-to-entailment generation task to teach it how to rewrite a summary which is a directed-logical subset of (i.e., logically follows from) the input document, and contains no contradictory or unrelated information. For the question generation task, we use the SQuAD dataset (Rajpurkar et al., 2016) , where we learn to generate a question given a sentence containing the answer, similar to the recent work by Du et al. (2017) . Our entailment generation task is based on the recent SNLI classification dataset and task (Bowman et al., 2015) , converted to a generation task .Further, we also present novel multi-task learning architectures based on multi-layered encoder and decoder models, where we empirically show that it is substantially better to share the higherlevel semantic layers between the three aforementioned tasks, while keeping the lower-level (lexico-syntactic) layers unshared. We also explore different ways to optimize the shared parameters and show that 'soft' parameter sharing achieves higher performance than hard sharing.Empirically, our soft, layer-specific sharing model with the question and entailment generation auxiliary tasks achieves statistically significant improvements over the state-of-the-art on both the CNN/DailyMail and Gigaword datasets. It also performs significantly better on the DUC-2002 transfer setup, demonstrating its strong generalizability as well as the importance of auxiliary knowledge in low-resource scenarios. We also report improvements on our auxiliary question and entailment generation tasks over their respective previous state-of-the-art. 
Moreover, we significantly decrease the training time of the multitask models by initializing the individual tasks from their pretrained baseline models. Finally, we present human evaluation studies as well as detailed quantitative and qualitative analysis studies of the improved saliency detection and logical inference skills learned by our multi-task model. | 0 |
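As an illustration of the "soft" parameter sharing idea (tying only the higher layers across tasks through a penalty rather than literally sharing their weights), here is a minimal PyTorch sketch. The two-layer LSTM encoders, the decision that layer-1 parameters count as the "higher-level" ones, and the penalty weight are assumptions for the example, not the paper's architecture.

    import torch
    import torch.nn as nn

    # Two single-task encoders; only their top (layer-1) parameters are softly shared.
    enc_summ = nn.LSTM(input_size=128, hidden_size=256, num_layers=2, batch_first=True)
    enc_aux  = nn.LSTM(input_size=128, hidden_size=256, num_layers=2, batch_first=True)

    def soft_sharing_penalty(m1, m2, shared_suffixes=("_l1",), weight=1e-3):
        # L2 distance between corresponding higher-layer parameters of the two models.
        penalty = 0.0
        p2 = dict(m2.named_parameters())
        for name, p in m1.named_parameters():
            if any(name.endswith(s) for s in shared_suffixes):
                penalty = penalty + (p - p2[name]).pow(2).sum()
        return weight * penalty

    x = torch.randn(4, 10, 128)
    out, _ = enc_summ(x)
    task_loss = out.mean()                      # stand-in for the real summarization loss
    loss = task_loss + soft_sharing_penalty(enc_summ, enc_aux)
    loss.backward()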
Many NLP tasks can be phrased as decision problems over complex linguistic structures. Successful learning depends on correctly encoding these (often latent) structures as features for the learning system. Tasks such as transliteration discovery (Klementiev and , recognizing textual entailment (RTE) (Dagan et al., 2006) and paraphrase identification (Dolan et al., 2004) are a few prototypical examples. However, the input to such problems does not specify the latent structures and the problem is defined in terms of surface forms only. Most current solutions transform the raw input into a meaningful intermediate representation 1 , and then encode its structural properties as features for the learning algorithm.Consider the RTE task of identifying whether the meaning of a short text snippet (called the hypothesis) can be inferred from that of another snippet (called the text). A common solution (MacCartney et al., 2008; is to begin by defining an alignment over the corresponding entities, predicates and their arguments as an intermediate representation. A classifier is then trained using features extracted from the intermediate representation.The idea of using a intermediate representation also occurs frequently in other NLP tasks (Bergsma and Kondrak, 2007; Qiu et al., 2006) .While the importance of finding a good intermediate representation is clear, emphasis is typically placed on the later stage of extracting features over this intermediate representation, thus separating learning into two stages -specifying the latent representation, and then extracting features for learning. The latent representation is obtained by an inference process using predefined models or welldesigned heuristics. While these approaches often perform well, they ignore a useful resource when generating the latent structure -the labeled data for the final learning task. As we will show in this paper, this results in degraded performance for the actual classification task at hand. Several works have considered this issue (McCallum et al., 2005; Goldwasser and Roth, 2008b; Chang et al., 2009; Das and Smith, 2009) ; however, they provide solutions that do not easily generalize to new tasks.In this paper, we propose a unified solution to the problem of learning to make the classification decision jointly with determining the intermediate representation. Our Learning Constrained Latent Representations (LCLR) framework is guided by the intuition that there is no intrinsically good intermediate representation, but rather that a representation is good only to the extent to which it improves performance on the final classification task. In the rest of this section we discuss the properties of our framework and highlight its contributions.Learning over Latent Representations This paper formulates the problem of learning over latent representations and presents a novel and general solution applicable to a wide range of NLP applications. We analyze the properties of our learning solution, thus allowing new research to take advantage of a well understood learning and optimization framework rather than an ad-hoc solution. 
We show the generality of our framework by successfully applying it to three domains: transliteration, RTE and paraphrase identification.Joint Learning Algorithm In contrast to most existing approaches that employ domain specific heuristics to construct intermediate representations to learn the final classifier, our algorithm learns to construct the optimal intermediate representation to support the learning problem. Learning to represent is a difficult structured learning problem however, unlike other works that use labeled data at the intermediate level, our algorithm only uses the binary supervision supplied for the final learning problem.Flexible Inference Successful learning depends on constraining the intermediate representation with task-specific knowledge. Our framework uses the declarative Integer Linear Programming (ILP) inference formulation, which makes it easy to define the intermediate representation and to inject knowledge in the form of constraints. While ILP has been applied to structured output learning, to the best of our knowledge, this is the first work that makes use of ILP in formalizing the general problem of learning intermediate representations. | 0 |
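A minimal sketch of the joint learning loop that LCLR formalizes: pick the best-scoring latent structure for each example under the current weights, then update a linear classifier using only the binary label. Real LCLR performs the inference step with declarative ILP constraints; here a tiny enumerated candidate set and a simple hinge-loss update stand in for both, so all data and hyperparameters are illustrative.

    import numpy as np

    def best_latent(w, candidates):
        # Inference step: choose the candidate latent structure with the highest score.
        scores = candidates @ w
        return candidates[int(np.argmax(scores))]

    def train(examples, labels, epochs=20, lr=0.1, reg=0.01):
        dim = examples[0].shape[1]
        w = np.zeros(dim)
        for _ in range(epochs):
            for cands, y in zip(examples, labels):
                phi = best_latent(w, cands)          # infer the latent representation
                margin = y * (w @ phi)
                if margin < 1:                       # hinge-loss update on the chosen structure
                    w = (1 - lr * reg) * w + lr * y * phi
                else:
                    w = (1 - lr * reg) * w
        return w

    rng = np.random.default_rng(1)
    examples = [rng.normal(size=(3, 4)) for _ in range(8)]   # 3 candidate structures per example
    labels = [1, -1, 1, 1, -1, -1, 1, -1]                    # only binary supervision is used
    print(np.round(train(examples, labels), 2))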
One of the most complicated phenomena in English is conjunction constructions. Even quite simple noun phrases like (1) Cats with whiskers and tails are structurally ambiguous and would cause problems when translated from English to, say, Chinese. Since in Chinese all the modifiers of the noun should go before it, two different translations in Chinese might be got from the above phrase: (1a) (With whiskers and tails) de (cats) ("de" is a particle which connects the modifiers and the modifieds); (1b) ((With whiskers) de (cats)) and (tails). Needless to say, a machine translation system should be able to analyse correctly, among other things, the conjunction constructions before high quality translation can be achieved. As is well known, ATN (Augmented Transition Network) grammars are powerful in natural language parsing and have been widely applied in various NL processing systems. However, the standard ATN grammars are rather weak in dealing with conjunctions. In (Woods 73), a special facility SYSCONJ for processing conjunctions was designed and implemented in the LUNAR speech question-answering system. It is capable of analysing reduced conjunctions impressively (eg, "John drove his car through and completely demolished a plate glass window"), but it has two drawbacks: first, for the processing of general types of conjunction constructions, it is too costly and too inefficient; secondly, the method itself is highly non-deterministic and easily results in combinatorial explosions. In (Blackwell 81), a WRD AND arc was proposed. The arc would take the interpreter from the final to the initial state of a computation, then analyse the second argument of a coordinated construction on a second pass through the ATN network. With this method she can deal with some rather complicated conjunction constructions, but in fact a WRD AND arc could have been added to nearly every state of the network, thus making the grammar extremely bulky. Furthermore, her system lacks the power for resolving the ambiguities contained in structures like (1). In the machine translation system designed by (Nagao et al 82), when dealing with conjunctions, only the nearest two items of the same parts of speech were processed, while the following types of coordinated conjunctions were not analysed correctly: (noun + prep + noun) + and + (noun + prep + noun); (adj + noun) + and + noun. (Boguraev in press) suggested that a demon should be created which would be woken up when "and" is encountered. The demon will suspend the normal processing, inspect the current context (the local registers which hold constituents recognised at this level) and recent history, and use the information thus gained to construct a new ATN arc dynamically which seeks to recognise a constituent categorially similar to the one just completed or being currently processed. Obviously the demon is based on expectations, but what follows the "and" is extremely uncertain, so that it would be very difficult for the demon to reach a high efficiency. A kind of "data-driven" alternative which may reduce the non-determinism is to try to decide the scope of the left conjunct retrospectively by recognising first the type of the right conjunct, rather than to predict the latter by knowing the category of the constituent to the left of the coordinator which is "just completed or being currently processed" -- an obscure or even misleading specification.
CASSEX (Chinese Academy of Social Sciences; University of Essex) is an ATN parser based on part of the programs developed by Boguraev (1979), which was designed for the automatic resolution of linguistic ambiguities. Conjunctions, one major source of linguistic ambiguities, however, were not taken into consideration there because, as the author put it himself, "they were felt to be too large a problem to be tackled along with all the others" (Boguraev 79, 1.6). A new set of grammars has been written, and a lot of modifications have been made to the grammar interpreter, so that conjunctions could be dealt with within the ATN framework. The following are the example sentences correctly parsed by the package:
Ex1. The man with the telescope and the umbrella kicked the ball.
Ex2. The man with the telescope and the umbrella with a handle kicked the ball.
Ex3. The man with the telescope and the woman kicked the ball.
Ex4. The man with the telescope and the woman with the umbrella kicked the ball.
Ex5. The man with the child and the woman kicked the ball.
Ex6. The man with the child and the woman with the umbrella kicked the ball.
Ex7. The man with the child and the woman is kicking the ball.
Ex8. The man with the child and the woman are kicking the ball.
Ex9. The man with the child and the umbrella fell.
Ex10. The man kicked the ball and the child threw the ball.
Ex11. The man kicked the ball and the child.
Ex12. The man kicked the child and the woman threw the ball.
Ex13. The man kicked the child and threw the ball.
Ex14. The man kicked and threw the ball.
Ex15. The man kicked and the woman threw the ball.
III ELEMENTARY NP AND EXPANDED NP
The term 'elementary NP' is used to indicate a noun phrase which can be embedded in but has no other noun phrases embedded in it. A noun phrase which contains other, embedded, NPs is called 'expanded NP'. Thus, when analysing the sentence fragment "the man with the telescope and the woman with the umbrella", we will have four elementary NPs ("the man", "the telescope", "the woman" and "the umbrella") and two expanded NPs ("the man with the telescope" and "the woman with the umbrella"). We may well have a third kind of NP, the coordinated NP with a conjunction in it, but it is the result of, rather than the material for, conjunction processing, and therefore will not receive particular attention. In the text that follows we will use 'EL-NP' and 'EXP-NP' to represent the two types of noun phrases, respectively. LEFT-PART will stand for the whole fragment to the left of the coordinator, and RIGHT-PART for the fragment to the right of it. LEFT-WORD and RIGHT-WORD will indicate the word that immediately precedes and follows, respectively, the coordinator. The conjunct to the right of the coordinator will be called RIGHT-PHRASE. Constraints for determining the grammaticalness of constructions involving coordinating conjunctions have been suggested by linguists, among which are (Ross 67)'s CSC (Coordinate Structure Constraint), (Schachter 77)'s CCC (Coordinate Constituent Constraint), (Williams 78)'s Across-the-Board (ATB) Convention, and (Gazdar 81)'s non-transformational treatment of coordinate structures using the conception of 'derived categories'. These constraints are useful in the investigation of coordination phenomena, but in order to process coordinating structures automatically, some constraint defined from the procedural point of view is still required. The following ordered rules, named CSDC (Conjuncts Scope Determination Constraints), are suggested and embodied in the CASSEX package so as to meet the need for automatically deciding the scope of the conjuncts:
1. Syntactical constraint. The syntactical constraint has two parts: 1.1 the conjuncts should be of the same syntactical category; 1.2 the coordinated constituent should be in conformity syntactically with the other constituents of the sentence, eg if the coordinated constituent is the subject, it should agree with the finite verb in terms of person and number. According to this constraint, Ex8 should be analysed as follows (the representation is a tree diagram with 'CLAUSE' as the root and centred around the verb, with various case nodes indicating the dependency relationships between the verb and the other constituents):

    (CLAUSE (TYPE DCL) (QUERY NIL) (TNS PRESENT) (ASPECT PROGRESSIVE)
            (MODALITY NIL) (NEG NIL)
            (V (KICK ((*ANI SUBJ) ((*PHYSOB OBJE) ((THIS (MAN PART)) INST) STRIK))))
            (OBJECT ((BALL1 ...) (NUMBER SINGLE) (QUANTIFIER SG) (DETERMINER (DET1 ONE))))
            (AGENT AND ((MAN ...) (NUMBER SINGLE) (QUANTIFIER SG) (DETERMINER (DET1 ONE))
                        (ATTRIBUTE ((PREP WITH) ((CHILD ...) (NUMBER ...)))))
                       ((WOMAN ...) ...)))

while Ex7 (and the more general case of Ex5) should be analysed with the coordination inside the prepositional modifier, roughly as 'The man with (AND (child) (woman)) ...'.
2. Semantic constraint. NPs whose head noun semantic primitives are the same should be preferred when deciding the scope of the two conjuncts coordinated by "and". However, if no such NPs can be found, NPs with different head noun semantic primitives are coordinated anyhow. Cf (Wilks 75). According to rule 2, Ex1 should be roughly represented as 'The man with (AND (telescope) (umbrella))'; Ex2, 'The man with (AND (telescope) (umbrella with a handle))'; Ex3, '(AND (man with telescope) (woman))'; and Ex4, '(AND (man with telescope) (woman with umbrella))'.
3. Symmetry constraint. When rules 1 and 2 are not enough for deciding the scope of the conjuncts, as for Ex5 and Ex6, this rule of preferring conjuncts with symmetrical pre-modifiers and/or post-modifiers will be in effect: Ex5 .... with (AND (child) (woman)) ...
If all the three rules above cannot help, the NP to the left of "and" which is closest to the coordinator should be coordinated with the NP immediately following the coordinator: Ex9. The man with (AND (child) (umbrella)) fell.
The seemingly straightforward way for dealing with conjunctions using ATN grammars would be to add extra WRD AND arcs to the existing states, as (Blackwell 81) proposed. The problem with this method is that, as (Boguraev in press) pointed out, "generally speaking, one will need WRD AND arcs to take the ATN interpreter from just about every state in the network back to almost each preceding state on the same level, thus introducing large overheads in terms of additional arcs and complicated tests." Instead of adding extra WRD AND arcs to the existing states in a standard ATN grammar, I set up a whole set of states to describe coordination phenomena. At the moment only "and" is taken into consideration. The first few states in the set are as follows:

    (CONJ/
      ((JUMP AND/)
       (EQ (GETR CONJUNCTION) ...)
       ...))

The CONJ/ states can be seen as a subgrammar which is separated from the main (conventional) ATN grammar, and is connected with the main grammar via the interpreter. The parser works in the following way. Before a conjunction is encountered, the parser works normally, except that two extra stacks are set: **NP-STACK and **PREP-STACK.
Each NP, either EL-NP or EXP-NP, is pushed into **NP-STACK, together with a label indicating whether the NP in question is a subject (SUBJ), an object (OBJ) or a preposition object (NP-IN-NMODS). The interpreter is responsible for looking ahead one word to see whether the word to come is a conjunction. This happens when the interpreter is processing "word-consuming" arcs, ie CAT, WRD, MEM and TST arcs. Hence there is no need to explicitly write WRD AND arcs into the grammar at all. By the time a conjunction is met, while the interpreter is ready to enter the CONJ/ state, either a clause (Ex10-13) or a noun phrase in subject position (Ex1-9) would have been POPed, or a verb (Ex14-15) would have been found. For the first case, a flag LEFT-PART-IS-CLAUSE will be set to true, and the interpreter will try to parse RIGHT-PART as a clause. If it succeeds, the representation of a sentence consisting of two coordinated clauses will be outputted. If it fails, a flag RIGHT-PART-IS-NOT-CLAUSE is set up, and the sentence will be reparsed. This time the left part will not be treated as a clause, and a coordinated NP object will be looked for instead. Ex10 and Ex11 are examples of coordinated clauses and a coordinated NP object, respectively. One case is treated specially: when LEFT-PART-IS-CLAUSE is true and RIGHT-WORD is a verb (Ex13), the subject will be copied from the left clause so that a right clause can be built. For the second case, a coordinated NP subject will be looked for. Eg, for Ex4, by the time "and" is met, an NP "the man with the telescope" would have been POPed, and the **NP-STACK would contain the EXP-NP "the man with the telescope" (labelled SUBJ) and the EL-NP "the telescope" (labelled NP-IN-NMODS). After the execution of the arc ((PUSH NP) (NP-START)), RIGHT-PHRASE has been found. If it has a PP modifier, a register NMODS-CONJ will be set to the value of the modifier. Now the NPs in the **NP-STACK will be POPed one by one to be compared with the right phrase semantically. The NP whose formula head (the head of the NOUN in it) is the same as that of the right conjunct will be taken as the proper left conjunct. If the NP matched is a subject or object, then a coordinative NP subject or object will be outputted; if it is an EL-NP in a PP modifier, then a function REBUILD-SUBJ or REBUILD-OBJ, depending on whether the modified EXP-NP is the subject or the object, will be called to rebuild the EXP-NP, whose PP modifier should consist of a preposition and two coordinated NPs. Here one problem arises: for Ex5, the first NP to be compared with the right phrase ("the woman") would be "the man with the child", whose head noun "man" would be matched to "woman"; but, according to our Symmetry Constraint, it is "child" that should be matched.
In order to implement this rule, whenever NMODS-CONJ is empty (meaning that the right NP has no post-modifier), the **NP-STACK should be reversed so that the first NP to be tried would be the one nearest to the coordinator (in this case "the child"). For the third case (LEFT-WORD is a transitive verb and the object slot is empty, Exs 14 and 15), the right clause will be built first, with or without copying the subject from LEFT-PART depending on whether a subject can be found in RIGHT-PART. Then, the left clause will be completed by copying the object from the right clause, and finally a clausal coordination representation will be returned. In the course of parsing, whenever a finite verb is met, the NPs at the same level as the verb that have been PUSHed into the **NP-STACK should be deleted from it, so that when constructing a possible coordinative NP object, the NPs in the subject position would not confuse the matching. Ex11 is thus correctly analysed. The package is written in RUTGERS-UCI LISP and is implemented on the PDP-10 computer at the University of Essex. It performs satisfactorily. However, there is still much work to be done. For instance, the most efficient way of treating reduced conjunctions is yet to be found. Another problem is the scope of the pre-modifiers and post-modifiers in coordinate constructions, for the resolution of which the Symmetry constraint may prove inadequate (eg, it cannot discriminate "American history and literature" and "American history and physics"). It is hoped that an ATN parser capable of dealing with a large variety of coordinated constructions in an efficient way will finally emerge from the present work. | 0
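A toy sketch of the left-conjunct search described above (Rules 2-4 of the CSDC): candidate NPs from an **NP-STACK-like list are matched against the right conjunct by the semantic primitive of their head nouns, with the search order reversed when the right conjunct has no post-modifier. The dictionary representation and the assumption that "man", "woman" and "child" share one primitive (in the spirit of Wilks 75) are illustrative choices, not the CASSEX data structures.

    def find_left_conjunct(np_stack, right_phrase, right_has_postmod):
        # Rule 3 (symmetry): if the right conjunct has no post-modifier, try the NP
        # nearest the coordinator first, i.e. scan the stack in reversed pop order.
        candidates = np_stack if right_has_postmod else list(reversed(np_stack))
        for np_ in candidates:
            if np_["head"] == right_phrase["head"]:    # Rule 2: same semantic head primitive
                return np_
        return candidates[0] if candidates else None   # fallback: the NP nearest the coordinator

    # Ex5: "The man with the child and the woman kicked the ball."
    np_stack = [
        {"text": "the man with the child", "head": "MAN", "role": "SUBJ"},
        {"text": "the child", "head": "MAN", "role": "NP-IN-NMODS"},
    ]
    right = {"text": "the woman", "head": "MAN"}
    print(find_left_conjunct(np_stack, right, right_has_postmod=False)["text"])  # -> the child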
Microblogging sites have become a common way of reflecting people's opinions. Unlike regular blogs, the size of a message on a microblogging site is relatively small. The need to automatically detect and summarize the sentiment of messages from users on a given topic or product has gained the interest of researchers. The sentiment of a message can be negative, positive, or neutral. In the broader sense, automatically detecting the polarity of a message would help business firms easily detect customers' feedback on their products or services, which in turn helps them improve their decision making by providing information on user preferences, product trends, and user categories (Chew and Eysenbach, 2010; Salethe and Khandelwal, 2011). Sentiment analysis is also used in other domains (Mandel et al., 2012). Twitter is one of the most widely used microblogging web sites, with over 200 million users sending over 400 million tweets daily (September 2013). The peculiar characteristics of Twitter data are as follows: emoticons are widely used, the maximum length of a tweet is 140 characters, some words are abbreviated, and some are elongated by repeating letters of a word multiple times. The organizers of SemEval-2014 have provided a corpus of tweets and posted a task to automatically detect their respective sentiments. Subtask B of Task 9: Sentiment Analysis on Twitter is described as follows: "Given a message, classify whether the message is of positive, negative, or neutral sentiment. For messages conveying both a positive and negative sentiment, whichever is the stronger sentiment should be chosen." This paper describes the system submitted by KUNLBLab for participation in SemEval-2014 Task 9 subtask B. Models were trained using the LIBLINEAR classification library (Fan et al., 2008). An accuracy of 66.11% is attained by the classifier when testing on the development set. The remainder of the document is organized as follows: Section 2 presents a brief literature review on sentiment analysis of Twitter data. Section 3 discusses the system developed to solve the above task, the characteristics of the dataset, preprocessing of the dataset, and various feature representations. Section 4 illustrates the evaluation results. Section 5 presents conclusions and remarks. | 0
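For illustration, a minimal pipeline in the spirit of the described system: scikit-learn's LinearSVC is backed by LIBLINEAR, so a three-class bag-of-n-grams classifier can be sketched as follows. The tiny tweet set and the TF-IDF features are placeholders, not the submitted system's training data or feature representation.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    # Illustrative tweets; a real system would use the SemEval-2014 training data
    # and richer features (emoticons, elongated words, lexicon scores, ...).
    tweets = ["I love this phone :)", "worst movie everrr", "the meeting is at 5pm",
              "so happy with the update", "this is terrible", "bus arrives in 10 minutes"]
    labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

    clf = make_pipeline(
        TfidfVectorizer(lowercase=True, ngram_range=(1, 2)),  # word unigrams and bigrams
        LinearSVC(C=1.0),                                     # LIBLINEAR-backed linear SVM
    )
    clf.fit(tweets, labels)
    print(clf.predict(["I really love this :)", "see you at noon"]))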
One of the overarching goals of Augmentative and Alternative Communication technology is to help impaired users communicate more quickly and more naturally. Over the past thirty years, solutions that attempt to reduce the amount of effort needed to input a sentence have include semantic com-paction (Baker, 1990) , and lexicon-or languagemodel-based word prediction (Darragh et al., 1990; Higginbotham, 1992; Li and Hirst, 2005; Trost et al., 2005; Trnka et al., 2006; Trnka et al., 2007; Wandmacher and Antoine, 2007) , among others. In recent years, there has been an increased interest in whole utterance-based and discourse-based approaches (see Section 2). Such approaches have been argued to be beneficial in that they can speed up the conversation, thus making it appear more felicitous . Most commercial tablets sold as AAC devices contain an inventory of canned phrases, comprising such items as common greetings, polite phrases, salutations and so forth. Users can also enter their own phrases, or indeed entire sequences of phrases (e.g., for a prepared talk).The work presented here attempts to take whole phrase prediction one step further by automatically predicting appropriate responses to utterances by mining conversational text. In an actual deployment, one would present a limited number of predicted phrases in a prominent location on the user's device, along with additional input options. The user could then select from these phrases, or revert to other input methods. In actual use, one would also want such a system to incorporate speech recognition (ASR), but for the present we restrict ourselves to typed text -which is perfectly appropriate for some modes of interaction such as on-line social media domains. Using a corpus of 72 million words from American soap operas, we isolate features useful in predicting an appropriate set of responses for the previous utterance of an interlocutor. The main results of this work are a method that can automati-cally produce appropriate responses to utterances in some cases, and an estimate of what percentage of dialog may be amenable to such techniques. Alm et al. (1992) discuss how AAC technology can increase social interaction by having the utterance, rather than the letter or word, be the basic unit of communication. Findings from conversational analysis suggest a number of utterances common to conversation, including short conversational openers and closers (hello, goodbye), backchannel responses (yeah?), and quickfire phrases (That's too bad.). Indeed "small talk" is central to smooth-flowing conversation (King et al., 1995) . Many modern AAC systems therefore provide canned small-talk phrases (Alm et al., 1993; Todman et al., 2008) . | 0 |
NLP is the subset of AI that is focused on the scientific study of linguistic phenomena (Association for Computational Linguistics, 2021). Human-computer interaction (HCI) is "the study and practice of the design, implementation, use, and evaluation of interactive computing systems" (Rogers, 2012). Grudin described HCI and AI as two fields divided by a common focus (Grudin, 2009): While both are concerned with intelligent behavior, the two fields have different priorities, methods, and assessment approaches. In 2009, Grudin argued that while AI research traditionally focused on long-term projects running on expensive systems, HCI is focused on short-term projects running on commodity hardware. For successful HCI+NLP applications, a synthesis of both approaches is necessary. As a first step towards this goal, this article, informed by our sensibility as HCI researchers, provides five concrete methods from HCI to study the design, implementation, use, and evaluation of HCI+NLP systems. One promising pathway for fostering interdisciplinary collaboration and progress in both fields is to ask what each field can learn from the methods of the other. On the one hand, while HCI directly and deeply involves the end-users of a system, NLP involves people as providers of training data or as judges of the output of the system. On the other hand, NLP has a rich history of standardised evaluation metrics with freely available datasets and comparable benchmarks. HCI methods that enable deep involvement are needed to better understand the perspective of people using NLP, or being affected by it, their experiences, as well as related challenges and benefits. As a synthesis of this user focus and the standardized benchmarks, HCI+NLP systems could combine more standardized evaluation procedures and material (data, tasks, metrics) with user involvement. This could lead to better comparability and clearer measures of progress. This may also spur systematic work towards "grand challenges", that is, uniting HCI researchers under a common goal (Kostakos, 2015). To facilitate a productive collaboration between HCI+NLP, clearly defined tasks that attract a large number of researchers would be helpful. These tasks could be accompanied with data to train models, as a methodological approach from NLP, and methodological recommendations on how to evaluate these systems, as a methodological approach from HCI. One task could e.g. define which questions should be posed to experiment participants. If the questions regarding the evaluation of an experiment are fixed, the results of different experiments could be more comparable. This would not only unite a variety of research results, but it could also increase the visibility of the researchers who participate. Complementary, NLP could benefit from asking further questions about use cases and usage contexts, and from subsequently evaluating contributions in situ, including use by the intended target group (or indirectly affected groups) of NLP. In conclusion, both fields stand to gain an enriched set of methodological procedures, practices, and tools.
In the following, we propose five HCI+NLP methods that we consider useful in advancing research in both fields. Table 1 provides a short description of each of the five HCI+NLP methods that this paper highlights:
1. User-Centered NLP: user studies ensure that users understand the output and the explanations of the NLP system.
2. Co-Creating NLP: deep involvement from the start enables users to actively shape a system and the problem that the system is solving.
3. Experience Sampling: richer data collected by (active) users enables a deeper understanding of the context and the process in which certain data was created.
4. Crowdsourcing: an evaluation at scale with humans-in-the-loop ensures high system performance and could prevent biased results or discrimination.
5. User Models: simulating real users computationally can automate routine evaluation tasks to speed up the development.
With our non-exhaustive overview, we hope to inspire interdisciplinary discussions and collaborations, ultimately leading to better interactive NLP systems - both "better" in terms of NLP capabilities and regarding usability, user experience, and relevance for people. | 0
NLP is a relatively new area of involvement in the context of Nepal. The first ever NLP works in Nepal include the Nepali Spell Checker and Thesaurus, which were released in the year 2005. The years after that saw an increasing amount of research and development of NLP resources and applications under different programs. This included Dobhase 1, an English to Nepali Machine Translation System, a Stemmer and Morphological Analyzer, a Parts-of-Speech (POS) Tagger, a Chunker, a Parser, a corpus-based on-line Nepali monolingual dictionary, Text-To-Speech, etc. On the resources front, by 2008, we had developed a Lexicon, a Nepali Written Corpus, a Parallel Corpus, a POS Tagset, Speech Recordings, etc. In the sections that follow, we will discuss the current achievements and the possible advanced applications that can be developed on the basis of the existing resources and applications. | 0
Irony is a form of figurative language, considered as "saying the opposite of what you mean", where the opposition of literal and intended meanings is very clear (Barbieri and Saggion, 2014; Liebrecht et al., 2013). Traditional approaches in NLP (Tsur et al., 2010; Barbieri and Saggion, 2014; Karoui et al., 2015; Farías et al., 2016) model irony based on pattern-based features, such as the contrast between high- and low-frequency words, the punctuation used by the author, the level of ambiguity of the words and the contrast between the sentiments. Also, (Joshi et al., 2016) recently added word embedding statistics to the feature space and further boosted the performance in irony detection.
Figure 1: Attention heat-map visualization. The color intensity of each word / character corresponds to its weight (importance), as given by the self-attention mechanism (Section 2.6).
Modeling irony, especially in Twitter, is a challenging task, since in ironic comments the literal meaning can be misguiding; irony is expressed in "secondary" meaning and fine nuances that are hard to model explicitly in machine learning algorithms. Tracking irony in social media poses the additional challenge of dealing with special language, social media markers and abbreviations. Despite the accuracy achieved in this task by handcrafted features, a laborious feature-engineering process and domain-specific knowledge are required; this type of prior knowledge must be continuously updated and investigated for each new domain. Moreover, the difficulty in parsing tweets (Gimpel et al., 2011) for feature extraction renders their precise semantic representation, which is key to determining their intended gist, much harder. In recent years, the successful utilization of deep learning architectures in NLP led to alternative approaches for tracking irony in Twitter (Joshi et al., 2017; Ghosh and Veale, 2017). (Ghosh and Veale, 2016) proposed a Convolutional Neural Network (CNN) followed by a Long Short-Term Memory (LSTM) architecture, outperforming the state-of-the-art. (Dhingra et al., 2016) utilized deep learning for representing tweets as a sequence of characters, instead of words, and proved that such representations reveal information about the irony concealed in tweets. In this work, we propose the combination of word- and character-level representations in order to exploit both semantic and syntactic information of each tweet for successfully predicting irony. For this purpose, we employ a deep LSTM architecture which models words and characters separately. We predict whether a tweet is ironic or not, as well as the type of irony in the ironic ones, by ensembling the two separate models (late fusion). Furthermore, we add an attention layer to both models, to better weigh the contribution of each word and character towards irony prediction, as well as to better interpret the descriptive power of our models (Figure 1). Attention weighting also better addresses the problem of supervising learning in deep learning architectures.
The suggested model was trained only on constrained data, meaning that we did not utilize any external dataset for further tuning of the network weights. The two deep-learning models submitted to SemEval-2018 Task 3 "Irony detection in English tweets" (Van Hee et al., 2018) are described in this paper with the following structure: in Section 2 an overview of the proposed models is presented, in Section 3 the models for tracking irony are described in detail, in Section 4 the experimental setup along with the respective results is presented, and finally, in Section 5 we discuss the performance of the proposed models. Figure 2 provides a high-level overview of our approach, which consists of three main steps: (1) the pre-training of word embeddings, where we train our own word embeddings on a big collection of unlabeled Twitter messages, (2) the independent training of our models, word- and char-level, and (3) the ensembling of the two models (late fusion).
Figure 2: High-level overview of our approach. | 0
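As a sketch of the attention pooling used on top of an LSTM encoder (for either the word- or the character-level model), the PyTorch module below computes one weight per time step and returns both the prediction and the weights that can be visualized as in Figure 1. The dimensions and the single-layer additive scoring are assumptions for the example, not the exact submitted architecture.

    import torch
    import torch.nn as nn

    class AttentiveLSTM(nn.Module):
        """A simple BiLSTM encoder with additive self-attention pooling over time steps."""
        def __init__(self, vocab_size, emb_dim=64, hidden=128, num_classes=2):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
            self.att = nn.Linear(2 * hidden, 1)
            self.out = nn.Linear(2 * hidden, num_classes)

        def forward(self, tokens):
            h, _ = self.lstm(self.emb(tokens))              # (batch, time, 2*hidden)
            scores = self.att(h).squeeze(-1)                # one scalar score per time step
            alpha = torch.softmax(scores, dim=1)            # attention weights (token importance)
            pooled = (alpha.unsqueeze(-1) * h).sum(dim=1)   # weighted sum of hidden states
            return self.out(pooled), alpha

    model = AttentiveLSTM(vocab_size=1000)
    logits, weights = model(torch.randint(1, 1000, (2, 12)))  # batch of 2 sequences, length 12
    print(logits.shape, weights.shape)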
The emergence of large collections of digitized spoken data has encouraged research in speech retrieval. Previous studies, notably those at TREC (Garafolo et al, 2000) , have focused mainly on well-structured news documents. In this paper we report on work carried out for the Cross-Language Evaluation Forum (CLEF) 2005 Cross-Language Speech Retrieval (CL-SR) track (White et al, 2005) . The document collection for the CL-SR task is a part of the oral testimonies collected by the USC Shoah Foundation Institute for Visual History and Education (VHI) for which some Automatic Speech Recognition (ASR) transcriptions are available (Oard et al., 2004) . The data is conversional spontaneous speech lacking clear topic boundaries; it is thus a more challenging speech retrieval task than those explored previously. The CLEF data is also annotated with a range of automatic and manually generated sets of metadata. While the complete VHI dataset contains interviews in many languages, the CLEF 2005 CL-SR task focuses on English speech. Cross-language searching is evaluated by making the topic statements (from which queries are automatically formed) available in several languages. This task raises many interesting research questions; in this paper we explore alternative term weighting methods and content indexing strategies.The remainder of this paper is structured as follows: Section 2 briefly reviews details of the CLEF 2005 CL-SR task; Section 3 describes the system we used to investigate this task; Section 4 reports our experimental results; and Section 5 gives conclusions and details for our ongoing work. | 0 |
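The term weighting experiments themselves are reported later; purely as an illustration of the kind of scheme such a system compares, here is standard Okapi BM25 over a toy set of transcript segments. The segments, and the choice of BM25 rather than the paper's actual weighting methods, are assumptions.

    import math
    from collections import Counter

    docs = ["the family moved to the ghetto in nineteen forty",
            "we hid in the forest near the village",
            "the village school closed in nineteen forty one"]
    tokenized = [d.split() for d in docs]
    N = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / N
    df = Counter(t for d in tokenized for t in set(d))   # document frequencies

    def bm25(query, doc, k1=1.2, b=0.75):
        tf = Counter(doc)
        score = 0.0
        for term in query.split():
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
        return score

    query = "village school"
    ranked = sorted(range(N), key=lambda i: -bm25(query, tokenized[i]))
    print([docs[i] for i in ranked])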
Modern Standard Arabic (MSA) is the lingua franca for the Arab world. Arabic speakers generally use dialects in daily interactions. There are 6 dominant dialects, namely Egyptian, Moroccan, Levantine, Iraqi, Gulf, and Yemeni (see http://en.wikipedia.org/wiki/Varieties_of_Arabic). The dialects may differ in vocabulary, morphology, syntax, and spelling from MSA, and most lack spelling conventions. Different dialects often make different lexical choices to express concepts. For example, the concept corresponding to "Oryd" ("I want") is expressed as "EAwz" in Egyptian, "Abgy" in Gulf, "Aby" in Iraqi, and "bdy" in Levantine (all transliterations follow the Buckwalter scheme). Often, words have different or opposite meanings in different dialects. Arabic dialects may differ morphologically from MSA. For example, Egyptian Arabic uses a negation construct similar to the French "ne pas" negation construct. The Egyptian word "mlEbt$" ("I did not play") is composed of "m+lEbt+$". The pronunciations of letters often differ from one dialect to another. For example, the letter "q" is typically pronounced in MSA as an unvoiced uvular stop (as the "q" in "quote"), but as a glottal stop in Egyptian and Levantine (like "A" in "Alpine") and a voiced velar stop in the Gulf (like "g" in "gavel"). Differing pronunciations often reflect on spelling. Social media platforms allowed people to express themselves more freely in writing. Although MSA is used in formal writing, dialects are increasingly being used on social media sites. Some notable trends on social platforms include (Darwish et al., 2012): - Mixed-language texts where bilingual (or multilingual) users code switch between Arabic and English (or Arabic and French). In the example "wSlny mrsy" ("got it thank you"), "thank you" is the transliterated French word "merci". - The use of phonetic transcription to match dialectal pronunciation. For example, "Sdq" ("truth") is often written as "Sj" in the Gulf dialect. - Creative spellings, spelling mistakes, and word elongations, which are ubiquitous in social texts. - The use of new words like "lol" ("LOL"). - The attachment of new meanings to words, such as using "THn" to mean "very" while it means "grinding" in MSA. The Egyptian dialect has the largest number of speakers and is the most commonly understood dialect in the Arab world. In this work, we focused on translating dialectal Egyptian to English using Egyptian-to-MSA adaptation. Unlike previous work, we first narrowed the gap between Egyptian and MSA using character-level transformations and word n-gram models that handle spelling mistakes, phonological variations, and morphological transformations. Later, we applied an adaptation method to incorporate MSA/English parallel data. The contributions of this paper are as follows: - We trained an Egyptian/MSA transformation model to make Egyptian look similar to MSA. We publicly released the training data. - We built a phrasal Machine Translation (MT) system on adapted Egyptian/English parallel data, which outperformed a non-adapted baseline by 1.87 BLEU points. - We used phrase-table merging (Nakov and Ng, 2009) to utilize MSA/English parallel data with the available in-domain parallel data. | 0
Semantic textual similarity aims to capture whether the meanings of two texts are similar. This concept is somewhat different from the definition of textual similarity itself, because in the latter we are only interested in measuring the number of lexical components that the two texts share. Therefore, similarity can range from exact semantic equivalence to complete unrelatedness between a pair of texts. Finding the semantic similarity between a pair of texts has become a big challenge for specialists in Natural Language Processing (NLP), because it has applications in NLP tasks such as machine translation, automatic construction of summaries, authorship attribution, machine reading comprehension, and information retrieval, among others, which usually need a way to calculate degrees of similarity between two given texts. Semantic textual similarity can be calculated using texts of different sizes, for example between a paragraph and a sentence, or a sentence and a phrase, or a phrase and a word, or even a word and a sense. When we consider this difference, the task is called "Cross-Level Semantic Similarity", but when this distinction is not considered, we call the task just "Semantic Textual Similarity". In this paper, we evaluate different features in order to determine those that obtain the best performance for calculating both cross-level semantic similarity and multilingual semantic textual similarity. The remainder of this paper is structured as follows. Section 2 presents the features used in both experiments. Section 3 shows the manner in which we used the features to determine the degree of semantic textual similarity. Section 4, on the other hand, shows the experiments we have carried out to determine cross-level semantic similarity. Finally, in Section 5 the conclusions and findings are given. | 0
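A minimal sketch of three classic similarity measures named above (Dice, Jaccard and cosine), computed over tokenized texts; whitespace tokenization is an assumption made only for the example.

```python
import math
from collections import Counter

def dice(a, b):
    """Dice coefficient over token sets."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

def jaccard(a, b):
    """Jaccard coefficient over token sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(a, b):
    """Cosine similarity over term-frequency vectors."""
    va, vb = Counter(a), Counter(b)
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

t1 = "the cat sat on the mat".split()
t2 = "a cat was sitting on the mat".split()
print(dice(t1, t2), jaccard(t1, t2), cosine(t1, t2))
```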
The recent years have seen unprecedented forward steps for Natural Language Processing (NLP) over almost every NLP subtask, relying on the advent of large data collections that can be leveraged to train deep neural networks. However, this progress has solely been observed in languages with significant data resources, while low-resource languages are left behind. The situation for endangered languages is usually even worse, as the focus of the scientific community mostly lies in language documentation. The typical endangered language documentation process includes the creation of language resources in the form of word lists, audio and video recordings, notes, or grammar fragments, with the created resources then stored in large online linguistics archives. This process is often hindered by the so-called Transcription Bottleneck, but recent advances (Ćavar et al., 2016; Adams et al., 2018) provide promising directions towards a solution for this issue. However, language documentation and linguistic description, although extremely important in themselves, do not meaningfully contribute to language conservation, which aims to ensure that the language stays in use. We believe that a major avenue towards continual language use is further creating language technologies for endangered languages, essentially elevating them to the same level as high-resource, economically or politically stronger languages. The majority of the world's languages are categorized as synthetic, meaning that they have rich morphology, be it fusional, agglutinative, polysynthetic, or a mixture thereof. As NLP keeps expanding its frontiers to encompass more and more languages, modeling of the grammatical functions that guide language generation is of utmost importance. It follows, then, that the next crucial step for expanding NLP research on endangered languages is creating benchmarks for standard NLP tasks in such languages. With this work we take a small first step towards this direction. We present a resource that allows for benchmarking two NLP tasks in San Juan Quiahije Chatino, an endangered language spoken in southern Mexico: morphological analysis and morphological inflection, with a focus on the verb morphology of the language. We first briefly discuss the Chatino language and the intricacies of its verb morphology (§2), then describe the resource (§3), and finally present baseline results on both the morphological analysis and the inflection tasks using state-of-the-art neural models (§4). We make our resource publicly available online. | 0
This paper deals with the automatic referent resolution of deictic and anaphoric expressions in a research prototype of a multimodal user interface called EDWARD. The primary aim of our project is the development and the assessment of an interface that combines the positive features of the language mode and the action mode of interaction (Claassen et al. 1990). EDWARD (Bos et al. 1994) integrates a graphical graph-editor called Gr 2 (Bos in press) and a Dutch natural language (NL) dialogue system called DoNaLD (Claassen and Huls 1991). One of the application domains involves a file system environment with documents, authors, a garbage container, and so on. The user can interact with EDWARD by manipulating the graphical representation of the file system (a directed graph), by menus, by written natural or formal language, or by combinations of these. EDWARD responds in NL (either written or spoken) and graphics. In this paper we will go into the semantic and pragmatic processes involved in the referent resolution of deictic and deixis-related expressions by EDWARD. (Syntactic issues will not be discussed here; for these, see Claassen and Huls 1991.) The proper interpretation of deictic expressions depends on the identity of the speaker(s) and the audience, the time of speech, the spatial location of speaker and audience at the time of speech, and non-linguistic communicative acts like facial expressions and eye, hand, and body movements. Lyons (1977, p. 637) provides the following definition of deixis: the location and identification of persons, objects, events, processes and activities being talked about, or referred to, in relation to the spatiotemporal context created and sustained by the act of utterance and the participation in it, typically, of a single speaker and at least one addressee. In the context of the present paper, we distinguish three types of deixis: personal, temporal, and spatial deixis. Personal deixis involves first- and second-person pronouns (e.g., I, we, and you). Temporal deixis is realized by the tense system of a language (e.g., he lives in Amsterdam) and by temporal modifiers (e.g., in an hour). Temporal deixis relates the time of speech to the relation(s) expressed by the utterance. Spatial deixis involves demonstratives or other referring expressions that are produced in combination with a pointing gesture (e.g., this↗ file, in which ↗ represents the pointing gesture). In the present paper, most attention will be given to spatial deixis. Deictic expressions can be contrasted with anaphors. Unlike deictic expressions, anaphors can be interpreted without regard to the spatiotemporal context of the speaking situation. Their interpretation depends merely on the linguistic expressions that precede them in the discourse. For example, this is an anaphor in Print the file about dialogue systems. Delete this. In many languages, the words used in deictic expressions are also used in anaphoric expressions. Deictic and anaphoric expressions frequently cause problems for NL analysis. Sijtsma and Zweekhorst (1993) find referent resolution errors in all three commercial NL interfaces they evaluate. In research laboratories, a couple of systems capable of interpreting deictic expressions have recently been developed. Allgayer et al. (1989) describe XTRA, a German NL interface to expert systems, currently applied to supporting the user's filling out a tax form. XTRA uses a dialogue memory and a tax-form hierarchy to interpret multimodal referring expressions. Data from the dialogue memory and from gesture analysis are combined (e.g., by taking the intersection of two sets of potential referents suggested by these information sources). Neal and Shapiro (1991) describe a research prototype called CUBRICON, which combines NL (English) with graphics. The application domain is military tactical air control. Like XTRA, CUBRICON uses two models to interpret deictic expressions: an attentional discourse focus space representation (adapted from Grosz and Sidner 1986) and a display model. Stock (1991) describes ALFresco, a prototype built for the exploration of frescoes, using NL (Italian) and pictures. For referent resolution in ALFresco, topic spaces (Grosz, 1978) are combined with Hajičová's (1987) approach, in which entities are assumed to "fade away" slowly. Cohen (1992) presents Shoptalk, a prototype information and decision-support system for semiconductor and printed-circuit board manufacturing with a NL (English) component. In Shoptalk too, the interpretation process is based on the approach of Grosz and Sidner. We believe that the fact that these systems use two separate mechanisms for modeling linguistic and perceptual context is a disadvantage over the use of only one mechanism for referent resolution. From a computational and an engineering position, one mechanism that handles both deictic and anaphoric expressions in the same way is preferable. We will (try to) show how both deictic and anaphoric references can be resolved using a single model. We have used the framework presented by Alshawi (1987) to develop a general context model that is able to represent linguistic as well as non-linguistic effects on the dialogue context. This model is used, in conjunction with a knowledge base, by EDWARD's interpretation component to solve deictic and anaphoric referring expressions. The same model and knowledge base are used by EDWARD's generation component to decide the form (e.g., he, the writer, a man), the content (e.g., the writer, the husband), and the mode (e.g., linguistic or simulated pointing gesture; Claassen 1992; Claassen et al. 1993) of referring expressions. (Figure: The main components of EDWARD.) In this paper, however, we focus on the use of the context model to resolve deictic and anaphoric expressions keyed in by the user. The rest of this paper is structured as follows: in Section 2, we present an overview of EDWARD. Next, we describe the knowledge sources EDWARD uses to interpret deictic and anaphoric expressions (Section 3). In Section 4, we go into the process of interpreting deictic and anaphoric expressions in some detail. Subsequently, in Section 5, we present some user interactions with EDWARD and we compare the results of EDWARD's referent resolution model with two other models, including that of Grosz and Sidner (1986). | 0
Document clustering is an aggregation of documents obtained by discriminating relevant documents from irrelevant ones. The relevance determination criteria for any two documents are a similarity measure and the representatives of the documents [1, 2, 3, 4]. There are several similarity measures, such as the Dice coefficient, Jaccard's coefficient, and the cosine measure. These similarity measures require that the documents are represented as document vectors, and the similarity of two documents is calculated from operations on the document vectors. In general, the representatives of a document or a cluster are document vectors that consist of <term, weight> pairs, and the document similarities are determined by the terms and their weighting values extracted from the document [7, 9]. In previous studies on document clustering, the focus was on the clustering algorithm, and the document representation methodology was not treated as an important issue. Document vectors are simply constructed from the term frequency (TF) and the inverted document frequency (IDF). This term weighting method starts from the precondition that the terms or keywords representing the document are calculated by TF-IDF. Term weighting by TF-IDF is generally used to construct a document vector, but we cannot say that it is the best way of representing a document. So, we suppose that there is a limitation to improving the accuracy of the clustering system only by improving the clustering algorithm, without changing the document/cluster representation method. Also, document clustering requires a large amount of memory space to keep the representatives of documents/clusters and the similarity measures [6, 8, 10]. Given N documents to be clustered, an N × N similarity matrix is needed to store the document similarity measures. Also, the recursive iteration of similarity calculation and reconstruction of the representatives of the clusters requires a huge number of computations. In this paper, we propose a new clustering method that is based on the keyword weighting approach. The clustering algorithm starts from seed documents, and the cluster is expanded by keyword relationships. The evolution of the cluster stops when no more documents are added to the cluster, and irrelevant documents are removed from the cluster candidates. The first step of the clustering algorithm is the creation and initialisation of a new cluster. A document D_0 that does not belong to any other cluster is selected and assigned to a new cluster, giving the initial state C_i^0 = {D_0} of cluster C_i. The document that is the first document in the new cluster is called a seed document (or an initialisation document). The seed document is randomly selected among the documents that do not belong to the clusters constructed so far. The keyword set of a document D is K_D = {k | k is a keyword extracted from D}, and the keyword set of the new cluster is initialised with the keyword set of the seed document, K_{C_i^0} = K_{D_0}. In the expanding step, the cluster is expanded by adding to it the related documents of the seed document, i.e., the documents that contain keywords of the seed document. That is, the cluster is expanded by adding all documents in which any keyword of K_{C_i^0} (the keywords extracted from the seed document) appears, which yields the next state of the cluster: C_i^1 = {D_x | k ∈ K_{D_x} for some k ∈ K_{C_i^0}}. The cluster expansion is performed by iterating keyword expansion and cluster expansion: for j = 1, 2, ..., the cluster keyword set is updated as the union of the keyword sets of all documents currently in the cluster, K_{C_i^j} = ∪_{D_x ∈ C_i^j} K_{D_x}; the next cluster state C_i^{j+1} is obtained by adding every document D_x whose keyword set shares a keyword with K_{C_i^j}; and documents whose similarity sim(D_x, K_{C_i^j}) falls below a threshold are deleted from the cluster. More documents are added to a cluster through the similarity evaluation between the keyword set and the document. If a new document is added to a cluster, then the keywords in the added document are also added to the keyword set of the cluster. The first expansion is performed with the keyword set extracted from the seed document. The second expansion is performed with the new keywords that are added to the cluster as a result of the first expansion, and the j-th expansion is performed with the (j-1)-th state of the keyword set. The number of iterations is decided through experiments. When a cluster is expanded from C_i^{j-1} to C_i^j, the keyword set K_{C_i^{j-1}} is also expanded to a new keyword set K_{C_i^j} containing the keywords that appear in the documents of the cluster C_i^j; that is, K_{C_i^j} is the union of the keyword sets K_{D_x} of all D_x ∈ C_i^j. The keyword set of the cluster is used to calculate the characteristic vector of each cluster. The characteristic vector consists of the weight values calculated by the term frequency (TF) and inverted document frequency (IDF) of the keywords, and it is used to calculate the similarity measure between a document and the cluster. | 0
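A minimal Python sketch of the seed-based cluster expansion described above. Keyword extraction is abstracted away (keyword sets are given as input), and the similarity between a document and the cluster keyword set is approximated here by keyword overlap (Jaccard), which is an assumption; the TF-IDF-weighted characteristic vectors mentioned above could be substituted.

```python
def expand_cluster(seed, docs, keywords, threshold=0.1, max_iters=5):
    """Grow one cluster from a seed document by keyword overlap.

    docs:      list of document ids not yet assigned to any cluster
    keywords:  dict mapping doc id -> set of keywords K_D
    """
    cluster = {seed}
    cluster_keywords = set(keywords[seed])          # K_C initialised from the seed
    for _ in range(max_iters):
        # keyword expansion: union of keyword sets of all documents in the cluster
        cluster_keywords = set().union(*(keywords[d] for d in cluster))
        # cluster expansion: every document sharing at least one cluster keyword
        candidates = {d for d in docs if keywords[d] & cluster_keywords}

        def sim(d):
            # Jaccard overlap between the document's keywords and the cluster keywords
            return len(keywords[d] & cluster_keywords) / len(keywords[d] | cluster_keywords)

        # drop candidates whose similarity to the cluster keyword set is too low
        new_cluster = {d for d in candidates if sim(d) >= threshold} | {seed}
        if new_cluster == cluster:                  # no more documents added: stop
            break
        cluster = new_cluster
    return cluster, cluster_keywords

keywords = {
    "d1": {"clustering", "keyword", "document"},
    "d2": {"keyword", "weighting", "document"},
    "d3": {"speech", "recognition"},
}
print(expand_cluster("d1", list(keywords), keywords))
```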
A traditional cross-lingual information retrieval (CLIR) system consists of two components: machine translation and monolingual information retrieval (Nie, 2010). The idea is to solve the translation problem first, so that the cross-lingual IR problem becomes a monolingual IR problem. However, the performance of translation-based approaches is limited by the quality of the machine translation, and it needs to handle translation ambiguity (Zhou et al., 2012). One possible solution is to consider the translation alternatives of individual words of queries or documents as in (Zbib et al., 2019; Xu and Weischedel, 2000), which provides more possibilities for matching query words in relevant documents compared to using single translations. But the alignment information is necessarily required in the training stage of the CLIR system to extract target-source word pairs from parallel data, and this is not a trivial task. To achieve good performance in IR, deep neural networks have been widely used in this task. These approaches can be roughly divided into two categories. The first class of approaches uses pretrained word representations or embeddings, such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014), directly to improve IR models. Usually these word embeddings are pretrained on large-scale text corpora using co-occurrence statistics, so they have modeled the underlying data distribution implicitly and should be helpful for building discriminative models. (Vulic and Moens, 2015) and (Litschko et al., 2018) used pretrained bilingual embeddings to represent queries and foreign documents, and then ranked documents by cosine similarity. (Zheng and Callan, 2015) used word2vec embeddings to learn query term weights. However, the training objectives of these neural embeddings are different from the objective of IR. The second set of approaches designs and trains deep neural networks based on IR objectives. These methods have shown impressive results on monolingual IR datasets (Xiong et al., 2017; Guo et al., 2016; Dehghani et al., 2017). They usually rely on large amounts of query-document relevance annotated data that are expensive to obtain, especially for low-resource language pairs in cross-lingual IR tasks. Moreover, it is not clear whether they generalize well when documents and queries are in different languages. Recently, multiple pretrained language models have been developed, such as BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019), that model the underlying data distribution and learn the linguistic patterns or features in language. These models have outperformed traditional word embeddings on various NLP tasks (Yang et al., 2019; Devlin et al., 2019; Peters et al., 2018; Lan et al., 2019). These pretrained models also provide new opportunities for IR. Therefore, several recent works have successfully applied BERT pretrained models to monolingual IR (Dai and Callan, 2019; Akkalyoncu Yilmaz et al., 2019) and passage re-ranking (Nogueira and Cho, 2019). In this paper, we extend and apply BERT as a ranker for CLIR. We introduce a cross-lingual deep relevance matching model for CLIR based on BERT. We finetune a pretrained multilingual model with home-made CLIR data and obtain very promising results. In order to finetune the model, we construct a large amount of training data from parallel data, which is mainly used for machine translation and is much easier to obtain compared to relevance labels of query-document pairs. In addition, we don't require source-target alignment information to construct training samples, and we avoid the quality issues of machine translation in traditional CLIR. The entire model is specifically optimized using a CLIR objective. Our main contributions are: • We introduce a cross-lingual deep relevance architecture with BERT, where a pretrained multilingual BERT model is adapted for cross-lingual IR. • We define a proxy CLIR task which can be used to easily construct CLIR training data from bitext data, without requiring any amount of relevance labels of query-document pairs in different languages. 2. Our approach. 2.1. Motivation. BERT (Devlin et al., 2019) is the first bidirectional language model, which makes use of left and right word contexts simultaneously to predict word tokens. It is trained by optimizing two objectives: masked word prediction and next sentence prediction. As shown in Figure 1, the inputs are a pair of masked sentences in the same language, where some tokens in both sentences are replaced by the symbol '[MASK]'. The BERT model is trained to predict these masked tokens, capturing within- and across-sentence meaning (or context), which is important for IR. The second objective aims to judge whether the sentences are consecutive or not. It encourages the BERT model to model the relationship between two sentences. The self-attention mechanism in BERT models the local interactions of words in sentence A with words in sentence B, so it can learn pairwise sentence or word-token relevance patterns. The entire BERT model is pretrained on large-scale text corpora and learns linguistic patterns in language, so search tasks with little training data can still benefit from the pretrained model. Finetuning BERT on a search task makes it learn IR-specific features. It can capture query-document exact term matching and bi-gram features for monolingual IR, as introduced in (Dai and Callan, 2019). Local matchings of words and n-grams have proven to be strong neural IR features. Bigram modeling is important, because it can learn the meaning of word compounds (bi-grams) from the meanings of individual words. Motivated by this work, we aim to finetune the pretrained BERT model for cross-lingual IR. The model is trained to predict the relevance score, which is the probability, p(q|s), of query q occurring in sentence s. There are three types of parameterized layers in this model: | 0
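A hedged sketch of scoring a (query, foreign-sentence) pair with a multilingual BERT cross-encoder via the Hugging Face Transformers API. The checkpoint name is an assumption, and the classification head shown here is untrained; in the setup described above it would be finetuned on relevance pairs constructed from bitext.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: a multilingual BERT checkpoint; the classification head still needs
# fine-tuning on (query, sentence) relevance pairs built from parallel data.
model_name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

def relevance_score(query, sentence):
    """P(relevant | query, sentence) from the cross-encoder."""
    inputs = tokenizer(query, sentence, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(relevance_score("president election results",
                      "Les résultats de l'élection présidentielle ont été annoncés."))
```

The query and the sentence are packed into a single input sequence, so BERT's self-attention can model local interactions between query terms and sentence terms across languages, which is the property the relevance-matching architecture relies on.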
Heavy NP Shift (HNPS) refers to the tendency for long or phonologically "heavy" phrases to be shifted to positions other than where they canonically occur. An English HNPS sentence is shown in (1a). The canonically word-ordered or "unshifted" version of (1a) is the sentence in (1b). When the object NP is short, however, the shifted word order is marked, as shown in (1c). Popular analyses of HNPS include: rightward movement of the NP (Ross 1986), where the heavy NP moves to the right edge of the constituent; the PP movement analysis (Kayne 1994), where the PP moves leftward; and the remnant movement analysis (Rochemont and Culicover 1997), where the heavy NP moves first, followed by movement of the "remnant" VP. The above analyses are schematized in (2-4) respectively. These syntactic analyses, distinct as they are, are equally successful in deriving the English HNPS word order. However, since the structural properties of a sentence predict how hard it is for humans to process it, it is unclear what processing predictions these analyses make, nor is it clear whether these predictions are borne out in observed human processing preferences. Psycholinguistic studies on human sentence processing have shown that sentences with HNPS word order are preferred in production over the canonical word order when the NP is long (Stallings et al. 1998). Additionally, it has been observed that the likelihood of shifting heavy NPs relates not only to the length of NPs, but to that of PPs as well. As the length of a PP increases, i.e., as the length difference between the NP and the VP decreases, HNPS is less likely to happen (Stallings and MacDonald 2011). It is then interesting to explore whether and how well these psycholinguistic findings are predicted by a given structural analysis. Minimalist Grammar (MG) parsing (Stabler 2013, Graf et al. 2017) provides a quantitative way to answer precisely these questions. As will hopefully become clear, it is possible to infer and compare the processing difficulties associated with syntactic structures by observing the parser's behavior when conjecturing those structures. This enables us to see whether the reported human processing findings are expected when a certain syntactic structure is assumed. In this paper, I investigate the processing predictions that the three aforementioned structural proposals make from the perspective of Minimalist parsing. I will show that the parser's behavior suggests that the rightward movement analysis correctly predicts processing biases based on memory usage. The PP movement and remnant movement analyses make correct predictions when unpronounced nodes are ignored by the memory usage calculation. Moreover, when contrasted with previous studies on complexity metrics (Graf et al. 2015, Zhang 2017, Graf et al. 2017), the same set of metrics that makes correct processing predictions for relative clauses works for HNPS structures as well. This paper is structured as follows. Section 2 introduces Minimalist Grammars, parsers, and complexity metrics. Section 3 discusses how the comparisons are set up and what the results are. I conclude the paper with a discussion in Section 4. | 0
The natural reference for AI systems is human behavior. In human social life, emotional intelligence is important for successful and effective communication. Humans have the natural ability to comprehend and react to the emotions of their communication partners through vocal and facial expressions (Kotti and Paternò, 2012; Poria et al., 2014a). A long-standing goal of AI has been to create affective agents that can recognize, interpret and express emotions. Early-stage research in affective computing and sentiment analysis has mainly focused on understanding affect towards entities such as a movie, product, service, candidacy, organization, action and so on in monologues, which involve only one person's opinion. However, with the advent of Human-Robot Interaction (HRI) applications such as voice assistants and customer service chatbots, researchers have started to build empathetic dialogue systems to improve the overall HRI experience by adapting to customers' sentiment. Studying sentiment in Human-Human Interactions (HHI) can help machines identify and react to human non-verbal communication, which makes the HRI experience more natural. The call center is a rich resource of communication data. A large number of calls are recorded daily in order to assess the quality of interactions between customer service representatives (CSRs) and customers. Learning the sentiment expressions of well-trained CSRs during communication can help AI understand not only what the user says, but also how he/she says it, so that the interaction feels more human. In this paper, we target and use real-world data - service calls - which pose additional challenges with respect to the artificial datasets that have typically been used in past multimodal sentiment research (Cambria et al., 2017), such as variability and noise. The basic 'sentiment' can be described on a scale of approval or disapproval, good or bad, positive or negative, and is termed polarity (Poria et al., 2014b). In the service industry, the key task is to enhance the quality of services by identifying issues that may be caused by systems of rules or by service quality. These issues are usually expressed by a caller's anger or disappointment on a call. In addition, service chatbots are widely used to answer customer calls. If customers get angry during HRI, the system should be able to transfer the customers to a live agent. In this study, we mainly focus on identifying 'negative' sentiment, especially 'angry' customers. Given the non-homogeneous nature of full call recordings, which typically include a mixture of negative and non-negative statements, sentiment analysis is addressed at the sentence level. Call segments are explored in both acoustic and linguistic modalities. The temporal sentiment patterns between customers and CSRs appearing in calls are described. The paper is organized as follows: Section 2 covers a brief literature review on sentiment recognition from different modalities; Section 3 proposes a pipeline which features our novelties in training data creation using real-world multi-party conversations, including a description of the data acquisition, speaker diarization, transcription, and semi-supervised learning annotation; the methodologies for acoustic and linguistic sentiment analysis are presented in Section 4; Section 5 illustrates the methodologies adopted for fusing different modalities; Section 6 presents experimental results including the evaluation measures and temporal sentiment patterns; finally, Section 7 concludes the paper and outlines future work. | 0
Chinese word segmentation has always been a difficult and challenging task in Chinese language processing. Several Chinese morphological analysis systems have been developed by different research groups, and they all perform quite well when segmenting written Chinese. But there still remain some problems. The biggest one is that each research group has its own segmentation standard for its system, which means that there is no single segmentation standard for all tagged corpora that is agreed upon across different research groups. And we believe that this situation slows down the progress of Chinese NLP research. Among all the differences between segmentation standards, the segmentation method for Chinese synthetic words is the most controversial part, because Chinese synthetic words have a quite complex structure and should be represented at several segmentation levels according to the needs of upper-level applications such as MT, IR and IME. For instance, a long (upper-level) segmentation unit may simplify syntactic analysis and IME applications, but a small (lower-level) segmentation unit might be better for information retrieval or word-based statistical summarization. But for now, no Chinese morphological analysis system can do all of these kinds of work with only one segmentation standard. Furthermore, although every segmentation system has good performance, in the analysis of real-world text there are still many out-of-vocabulary words which cannot be easily recognized because of the flexibility of Chinese synthetic word construction, especially proper names, which often appear as synthetic words. In order to make our Chinese morphological analysis system recognize more out-of-vocabulary words and fit different kinds of NLP applications, we try to analyze the structure of the internal information of Chinese synthetic words, categorize them into semantic and syntactic types, and store this information in a synthetic word dictionary by representing it with a kind of tree structure built on our system dictionary. In this paper, we first give a definition of Chinese synthetic words and classify them into several categories in Section 2. In Section 3, two previous studies on Chinese synthetic words are introduced. Then we propose a tree-based method for analyzing Chinese synthetic words and conduct a survey focused on 3-character morphologically derived words to obtain the features for the future machine learning process. In Section 4, we carry out an experiment using an SVM classifier to annotate 3-character morphologically derived words. Finally, Section 5 shows how this method could benefit Chinese morphological analysis and discusses our future work. | 0
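A toy sketch of the SVM annotation step mentioned above, using scikit-learn. The character features, the example words and the internal-structure labels ("2+1" vs. "1+2" splits) are hypothetical placeholders for illustration, not the survey's actual feature set.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def char_features(word):
    """Toy features for a 3-character word: the characters and their bigrams."""
    return {
        "c1": word[0], "c2": word[1], "c3": word[2],
        "c1c2": word[:2], "c2c3": word[1:],
    }

# Hypothetical training examples: word -> internal structure label (2+1 vs. 1+2 split).
train_words = ["出版社", "办公室", "副市长", "总工会"]
train_labels = ["2+1", "2+1", "1+2", "1+2"]

clf = make_pipeline(DictVectorizer(), SVC(kernel="linear"))
clf.fit([char_features(w) for w in train_words], train_labels)
print(clf.predict([char_features("图书馆")]))
```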
Recent studies on meaning representation parsing (MRP) have focused on different semantic graph frameworks (Oepen et al., 2019) such as bilexical semantic dependency graphs (Peng et al., 2017; Wang et al., 2018; Dozat and Manning, 2018; Na et al., 2019), universal conceptual cognitive annotation (Hershcovich et al., 2017, 2018), abstract meaning representation (Wang and Xue, 2017; Guo and Lu, 2018; Song et al., 2019; Lam, 2019, 2020; Zhou et al., 2020), and discourse representation structures (Abzianidze et al., 2019; van Noord et al., 2018; Liu et al., 2019; Evang, 2019; Liu et al., 2020). To jointly address various semantic graphs, the aim of the Cross-Framework MRP task (MRP 2020) at the 2020 Conference on Computational Natural Language Learning (CoNLL) is to develop semantic graph parsing across the following five frameworks (Oepen et al., 2020): 1) EDS: Elementary Dependency Structures (Oepen and Lønning, 2006), 2) PTG: Prague Tectogrammatical Graphs (Hajič et al., 2012), 3) UCCA: Universal Conceptual Cognitive Annotation (Abend and Rappoport, 2013), 4) AMR: Abstract Meaning Representation (Banarescu et al., 2013), and 5) DRG: Discourse Representation Graphs (Abzianidze et al., 2017). For MRP 2020, we address only the AMR framework and present a joint state model for graph-sequence iterative inference, as a simple extension of (Cai and Lam, 2020). The graph-sequence iterative model of (Cai and Lam, 2020) incrementally constructs an AMR graph starting from an empty graph G_0 by alternately applying two modules: 1) a Concept Solver, which uses a previous graph hypothesis G_i to predict a new concept, and 2) a Relation Solver, which uses a previous concept hypothesis to predict relations for the new concept. The dual-state model of (Cai and Lam, 2020) deploys two state vectors x_t and y_t for the graph-sequence iterative inference, which refer to the t-th sequence hypothesis and the t-th graph hypothesis, respectively. Unlike the dual-state model, we instead maintain a joint state vector z_t, which encodes both the sequence and graph hypotheses, to apply graph-sequence iterative inference in a simple and unified manner. During the iterative inference stage, we take the current joint state vector as a query vector and update the next joint state vector by applying attention mechanisms to the text (i.e., sequence memory) and graph (i.e., graph memory) parts separately. The final joint state vector is then passed to the concept and relation solvers, which predict new concepts and their relations, respectively, as with the dual-state model of (Cai and Lam, 2020). We submitted the results of our AMR parsing model during the post-evaluation stage and ranked between 3rd and 4th place among the participants in the official results under the cross-framework metric. The remainder of this paper is organized as follows: Section 2 presents the detailed architecture of our system. Section 3 describes the detailed process used for training biaffine attention models. Section 4 provides the official results of MRP 2020. Finally, some concluding remarks and a description of future research are given in Section 5. Figure 1 shows the neural architecture based on the joint state model for graph-sequence iterative inference. | 0
In recent years there has been increasing interest in improving the quality of SMT systems over a wide range of linguistic phenomena, including coreference resolution (Hardmeier et al., 2014) and modality (Baker et al., 2012). Negation, however, is a problem that has still not been researched thoroughly (section 2). Our previous study (Fancellu and Webber, 2015) takes a first step towards understanding why negation is a problem in SMT, through a manual analysis of the kinds of errors involved in its translation. Our error analysis employs a small set of standard string-based operations, applying them to the semantic elements involved in the meaning of negation (section 3). The current paper describes our work on understanding the causes of these errors. Focussing on the distinction between induction, search and model errors, we point out the challenges in trying to use existing techniques to quantify these three types of errors in the context of translating negation. Previous work on ascribing errors to induction, search, or model has taken an approach using oracle decoding, i.e. forcing the decoder to reconstruct the reference sentence as a proxy for analysing its potential. We show, however, that this technique is not well suited to semantic phenomena with local scope (such as negation), given that a conclusion drawn from the reconstruction of an entire sentence might refer to spans not related to these phenomena. Moreover, as in previous work, we stress once again the limitation of using a single reference to compute the oracle (section 4.1). To overcome these problems, we propose the use of an oracle hypothesis, instead of an oracle sentence, that relies uniquely on the negation elements contained in the source span and how these are expected to be translated in the target hypothesis at a given time during decoding (section 4.2). Sections 5 and 6 report the results of the analysis on a Chinese-to-English Hierarchical Phrase-Based Model (Chiang, 2007). We show that even if it is possible to detect the presence of model errors through the use of an oracle sentence, computing an oracle hypothesis at each step during decoding offers a more robust, in-depth analysis of the problem of translating negation and helps explain the errors observed during the manual analysis. | 0
With the growing amount of textual data available in the Internet, unsupervised methods for natural language processing gain a considerable amount of interest. Due to the very special usage of language, supervised methods trained on high quality corpora (e. g. containing newspaper texts) do not achieve comparable accuracy when being applied to data from fora or blogs. Huge annotated corpora consisting of sentences extracted from the Internet barely exist until now.Consequential a lot of effort has been put into unsupervised grammar induction during the last years and results and performance of unsupervised parsers improved steadily. Klein and Manning (2002) 's constituent context model (CCM) obtains 51.2% f-score on ATIS part-of-speech strings. The same model achieves 71.1% on Wall Street Journal corpus sentences with length of at most 10 POS tags. In (Klein and Manning, 2004) an approach combining constituency and dependency models yields 77.6% f-score. Bod (2006) 's all-subtree approach -known as Data-Oriented Parsing (DOP) -reports 82.9% for UML-DOP. Seginer (2007) 's common cover links model (CCL) does not need any prior tagging and is applied on word strings directly. The f-score for English is 75.9%, and for German (NEGRA10) 59% is achieved. Hänig et al. (2008) present a cooccurrence based constituent detection algorithm which is applied to word forms, too (unsupervised POS tags are induced using unsuPOS, see (Biemann, 2006) ). An f-score of 63.4% is reported for German data.In this paper, we want to present a new unsupervised co-occurrence based grammar induction model based on Hänig et al. (2008) . In the following section, we give a short introduction to the base algorithm unsuParse. Afterwards, we present improvements to this algorithm. In the final section, we evaluate the proposed model against existing ones and discuss the results. | 0 |
The CLPsych 2022 Shared Task (Tsakalidis et al., 2022a) introduces the problem of assessing changes in a person's mood over time on the basis of their linguistic content (Tsakalidis et al., 2022b). The purpose of the organisers is to focus on posting activity on online social media platforms. In particular, given a user's posts over a certain period of time, the aim of the task is to capture those sub-periods during which a user's mood deviates from their baseline mood and to use this information to predict the suicide risk at user level. Thus, the CLPsych 2022 Shared Task consists of two subtasks: (1) identify mood changes in users' posts over time, and (2) show how subtask A can help to assess the risk level of a user. This paper presents our participation in subtasks T1 and T2. The dataset (Tsakalidis et al., 2022b) provided by the organisers is composed of social media messages obtained from various sources (Losada and Crestani, 2016; Losada et al., 2020; Zirikly et al., 2019; Shing et al., 2018). Specifically, the dataset is composed of 256 timelines from Reddit obtained from 186 users who at some point in time have written in subreddits related to mental health. In total, there are more than 6K posts obtained over a time range of about two months. In the annotation process, timelines were manually checked for content related to mood changes. Four annotators were employed for this task. In terms of evaluation, three types of evaluation measures were used: traditional classification metrics, timeline-based classification metrics, and coverage-based metrics. | 0
The notion of word sense is central to computational lexical semantics. Word senses can be either encoded manually in lexical resources or induced automatically from text. The former knowledgebased sense representations, such as those found in the BabelNet lexical semantic network (Navigli and Ponzetto, 2012) , are easily interpretable by humans due to the presence of definitions, usage examples, taxonomic relations, related words, and images. The cost of such interpretability is that every element mentioned above is encoded manually in one of the underlying resources, such as Wikipedia. Unsupervised knowledge-free approaches, e.g. (Di Marco and Navigli, 2013; Bartunov et al., 2016) , require no manual labor, but the resulting sense representations lack the abovementioned features enabling interpretability. For instance, systems based on sense embeddings are based on dense uninterpretable vectors. Therefore, the meaning of a sense can be interpreted only on the basis of a list of related senses. We present a system that brings interpretability of the knowledge-based sense representations into the world of unsupervised knowledge-free WSD models. The contribution of this paper is the first system for word sense induction and disambiguation, which is unsupervised, knowledge-free, and interpretable at the same time. The system is based on the WSD approach of and is designed to reach interpretability level of knowledge-based systems, such as Babelfy (Moro et al., 2014) , within an unsupervised knowledgefree framework. Implementation of the system is open source. 1 A live demo featuring several disambiguation models is available online. 2 | 0 |
In Natural Language Processing (NLP), word segmentation is the starting point of Part-of-Speech (POS) tagging, semantic role labeling (SRL), and other similar studies. Particularly for the Chinese, Japanese and Korean languages, the absence of explicit boundaries between characters makes the Word Segmentation (WS) task indispensable in NLP. Dominant word segmentation methods have treated WS as a sequence tagging task (Xue, 2003). Various tagging schemas such as "BMES" (Begin, Middle, End, Single), "BIES" (Begin, Inside, End, Single), "SEP-APP" (Separate, Append), "BI" (Begin, Inside), and "START-NONSTART" have been employed to tackle the sequence labeling task. These tagging schemas are all character-based and can be summarized as four-tag ("BMES", "BIES") and two-tag ("SEP-APP", "BI", "START-NONSTART") schemas. Despite their diversity, these tagging schemas all carry implicit position information. For the four-tag schemas, the implicit information restricts the transitions between tags. Take "BMES" as an example: tag "B" cannot be followed by "B" or "S". These schemas heavily rely on the precise prediction of the relative position of each character in a segment. However, the exact position information is not essential for the WS task: any unreasonable inner prediction of a character's relative position results in incorrect segmentation, even when the boundary prediction is correct. There is no limitation on tag-to-tag transitions for the two-tag schemas, but according to common sense, the first character of a sentence must be predicted as "SEP", "B" or "START". The implicit positional constraint on the first tag of the sentence thus still exists, and it is necessary to ensure the prediction accuracy of the first tag during inference. Therefore, a CRF is required to revise unreasonable tag-to-tag transitions and to learn the implicit restrictions, including the one on the first tag of a sentence. The CRF alleviates unreasonable tag predictions to some degree, but the simultaneous learning of the transition and emission matrices still makes tag inference intractable. Current works attempt to make the network more complex (Chen et al., 2017; Tian et al., 2020) and to introduce more information (Cai et al., 2017) such as rich context, linguistic knowledge and extra knowledge to tackle the abovementioned problem. However, the intrinsic problem, which is the implicit positional restriction in the existing tagging schemas, is not well solved. In this paper, we propose "Connection(C)-No-Connection(NC)" tags, which target character-to-character connections, to deal with the WS task directly. A "C-NC" tag is independent of the previous state, and there is no dependency between states. Moreover, there is no restriction on the first state, as it is located between the first and the second characters; it can be either "C" or "NC". Predicting "C" or "NC" is a binary classification task. Therefore, a CRF is not required and can be substituted with a classification network. The tag-transition and implicit-restriction burdens can be substantially alleviated through such "C-NC" states. Because "C-NC" describes the connection state between two adjacent characters, we employ bigram features to cooperate with "C-NC". Compared with existing tagging schemas, which are character-based and treat bigram features as extra information, the bigram features in SpIn are the basic processing unit. Therefore, a brand-new Separation Inference (SpIn) framework is proposed, constructed on the bigram features and the classification layer. Sliding one by one along all the bigrams, words are yielded by allocating "C" and "NC" tags to the intervals between characters. SpIn significantly reduces the inference complexity (the CRF inference layer is reduced to a softmax network); dispenses with extra context information (merely the bigram feature is considered); and gains competitive CWS performance with machine learning models in contrast with deep learning models. Besides its effectiveness on Chinese Word Segmentation, our extensive experiments also verify its universality, attaining state-of-the-art (SOTA) performance on Japanese and Korean Word Segmentation benchmark tests. Our contributions are summarized as follows: • SpIn provides a new tagging schema from a novel perspective and solves the intrinsic problems of the existing tagging schemas. • SpIn is a universal framework that gains state-of-the-art performance on the Word Segmentation task in East Asian languages. • The SpIn framework is also suitable for machine learning models and has achieved competitive results. | 0
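A minimal sketch of how the "C-NC" schema maps between a segmented sentence and boundary tags, independent of the classifier that predicts the tags; the decoding step simply merges characters joined by "C".

```python
def words_to_tags(words):
    """Tag each gap between adjacent characters: 'C' = connected (same word), 'NC' = not."""
    tags = []
    for word in words:
        tags.extend(["C"] * (len(word) - 1))   # gaps inside the word
        tags.append("NC")                      # gap after the word
    return tags[:-1]                           # no gap after the last character

def tags_to_words(chars, tags):
    """Rebuild the segmentation from characters and their C/NC boundary tags."""
    words, current = [], chars[0]
    for ch, tag in zip(chars[1:], tags):
        if tag == "C":
            current += ch
        else:
            words.append(current)
            current = ch
    words.append(current)
    return words

words = ["我们", "喜欢", "自然", "语言", "处理"]
tags = words_to_tags(words)                    # one tag per character-to-character gap
print(tags)
print(tags_to_words(list("".join(words)), tags))
```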
Movable robots are ones that can execute tasks by moving around. If such robots can understand spoken language navigational instructions, they will become more useful and will be widely used. However, spoken language instructions are sometimes ambiguous in that their meanings differ depending on the situation, such as robot and obstacle locations, so it is not always easy to make robots understand spoken language instructions. Moreover, when they receive instructions while they are moving and understand those instructions only after they finish, accurate understanding is not easy, since the situation may change during the instruction utterances. Although there have been several pieces of work on robots that receive linguistic navigational instructions (Marge and Rudnicky, 2010; Tellex et al., 2011), they try to understand instructions before moving and do not deal with instructions when situations dynamically change. We will demonstrate a 3D virtual robotic system that understands spoken language navigational instructions in a situation-dependent way. It incrementally understands instructions so that it can understand them based on the situation at the point in time when the instructions are made. | 0
Collocation extraction typically proceeds by scoring collocation candidates with an association measure, where high scores are taken to indicate likely collocationhood. Two well-known such measures are pointwise mutual information (PMI) and mutual information (MI). In terms of observing a combination of words w1, w2, these are:

PMI(w1 w2) = log [ p_obs(w1 w2) / p_exp(w1 w2) ]   (1)

MI(w1 w2) = Σ p_obs · log [ p_obs / p_exp ], with the sum running over the outcomes of observing or not observing w1 and w2   (2)

PMI (1) is the logged ratio of the observed bigramme probability and the expected bigramme probability under independence of the two words in the combination. MI (2) is the expected outcome of PMI, and measures how much information of the distribution of one word is contained in the distribution of the other. PMI was introduced into the collocation extraction field by Church and Hanks (1990). Dunning (1993) proposed the use of the likelihood-ratio test statistic, which is equivalent to MI up to a constant factor. Two aspects of (P)MI are worth highlighting. First, the observed occurrence probability p_obs is compared to the expected occurrence probability p_exp. Secondly, the independence assumption underlies the estimation of p_exp. The first aspect is motivated by the observation that interesting combinations are often those that are unexpectedly frequent. For instance, the bigramme "of the" is uninteresting from a collocation extraction perspective, although it is probably amongst the most frequent bigrammes in any English corpus. However, we can expect to frequently observe the combination by mere chance, simply because its parts are so frequent. Looking at p_obs and p_exp together allows us to recognize these cases (see Manning and Schütze (1999) and Evert (2007) for more discussion). The second aspect, the independence assumption in the estimation of p_exp, is more problematic, however, even in the context of collocation extraction. As Evert (2007, p. 42) notes, the assumption of "independence is extremely unrealistic," because it ignores "a variety of syntactic, semantic and lexical restrictions." Consider an estimate for p_exp(the the). Under independence, this estimate will be high, as "the" itself is very frequent. However, with our knowledge of English syntax, we would say p_exp(the the) is low. The independence assumption leads to an overestimated expectation, and "the the" will need to be very frequent for it to show up as a likely collocation. A less contrived example of how the independence assumption might mislead collocation extraction is when bigramme distribution is influenced by compositional, non-collocational, semantic dependencies. Investigating adjective-noun combinations in a corpus, we might find that "beige cloth" gets a high PMI, whereas "beige thought" does not. This does not make the former a collocation or multiword unit. Rather, what we would measure is the tendency to use colours with visible things and not with abstract objects. Syntactic and semantic associations between words are real dependencies, but they need not be collocational in nature. Because of the independence assumption, PMI and MI measure these syntactic and semantic associations just as much as they measure collocational association. In this paper, we therefore experimentally investigate the use of a more informed p_exp in the context of collocation extraction. | 0
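A minimal sketch of PMI computed from bigram and unigram counts. By default p_exp is the independence estimate p(w1)p(w2); the optional informed_pexp argument stands in for the more informed expectation investigated above and is an assumption for illustration only.

```python
import math
from collections import Counter

def pmi(bigram_counts, unigram_counts, n_bigrams, informed_pexp=None):
    """PMI for each bigram. p_exp defaults to the independence estimate
    p(w1) * p(w2); an informed_pexp(w1, w2) function can replace it."""
    n_tokens = sum(unigram_counts.values())
    scores = {}
    for (w1, w2), count in bigram_counts.items():
        p_obs = count / n_bigrams
        if informed_pexp is not None:
            p_exp = informed_pexp(w1, w2)
        else:
            p_exp = (unigram_counts[w1] / n_tokens) * (unigram_counts[w2] / n_tokens)
        scores[(w1, w2)] = math.log(p_obs / p_exp)
    return scores

tokens = "the beige cloth lay on the beige cloth".split()
bigrams = Counter(zip(tokens, tokens[1:]))
unigrams = Counter(tokens)
for bg, score in sorted(pmi(bigrams, unigrams, len(tokens) - 1).items(), key=lambda x: -x[1]):
    print(bg, round(score, 2))
```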
Pretrained language models (PLMs) like BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019) and RoBERTa (Liu et al., 2019) have emerged as universal tools that capture a diverse range of linguistic and - as more and more evidence suggests - factual knowledge (Petroni et al., 2019; Radford et al., 2019). Recent work on knowledge captured by PLMs is focused on probing, a methodology that identifies the set of facts a PLM has command of. But little is understood about how this knowledge is acquired during pretraining and why. We analyze the ability of PLMs to acquire factual knowledge focusing on two mechanisms: reasoning and memorization. We pose the following two questions: a) Symbolic reasoning: Are PLMs able to infer knowledge not seen explicitly during pretraining? b) Memorization: Which factors result in successful memorization of a fact by PLMs? We conduct our study by pretraining BERT from scratch on synthetic corpora. The corpora are composed of short knowledge-graph-like facts: subject-relation-object triples. To test whether BERT has learned a fact, we mask the object, thereby generating a cloze-style query, and then evaluate predictions.

Symbolic reasoning. We create synthetic corpora to investigate six symbolic rules (equivalence, symmetry, inversion, composition, implication, negation); see Table 1. For each rule, we create a corpus that contains facts from which the rule can be learned. We test BERT's ability to use the rule to infer unseen facts by holding out some facts in a test set. For example, for composition, BERT should infer, after having seen that leopards are faster than sheep and sheep are faster than snails, that leopards are faster than snails. Our setup is similar to link prediction in the knowledge base domain and therefore can be seen as a natural extension of the question: "Language models as knowledge bases?" (Petroni et al., 2019). In the knowledge base domain, prior work (Sun et al., 2019; Zhang et al., 2020) has shown that models that are able to learn symbolic rules are superior to ones that are not. Talmor et al. (2019) also investigate symbolic reasoning in BERT using cloze-style queries. However, in their setup, there are two possible reasons for BERT having answered a cloze-style query correctly: (i) the underlying fact was correctly inferred or (ii) it was seen during training. In contrast, since we pretrain BERT from scratch, we have full control over the training setup and can distinguish cases (i) and (ii). A unique feature of our approach compared to prior work (Sinha et al., 2019; Weston et al., 2016; Clark et al., 2020) is that we do not gather all relevant facts and present them to the model at inference time. This is a crucial difference - note that human inference similarly does not require that all relevant facts are explicitly repeated at inference time.

Table 1: The six symbolic rules we investigate (cf. (Nayyeri et al., 2019)), with an example in natural language for entities e, f, g ∈ E, relations r, s, t ∈ R and attributes a, b, c ∈ A.
EQUI (Equivalence): (e, r, a) ⇐⇒ (e, s, a); e.g., (bird, can, fly) ⇐⇒ (bird, is able to, fly)
SYM (Symmetry): (e, r, f) ⇐⇒ (f, r, e); e.g., (barack, married, michelle) ⇐⇒ (michelle, married, barack)
INV (Inversion): (e, r, f) ⇐⇒ (f, s, e); e.g., (john, loves, soccer) ⇐⇒ (soccer, thrills, john)
NEG (Negation): (e, r, a) ⇐⇒ (e, not r, b); e.g., (jupiter, is, big) ⇐⇒ (jupiter, is not, small)
IMP (Implication): (e, r, a) ⇒ (e, s, b), (e, s, c), ...; e.g., (dog, is, mammal) ⇒ (dog, has, hair), (dog, has, neocortex), ...
COMP (Composition): (e, r, f) ∧ (f, s, g) ⇒ (e, t, g); e.g., (leopard, faster than, sheep) ∧ (sheep, faster than, snail) ⇒ (leopard, faster than, snail), here with r = s = t

We find that i) BERT is capable of learning some one-hop rules (equivalence and implication). ii) For others, even though high test precision suggests successful learning, the rules were not in fact learned correctly (symmetry, inversion and negation). iii) BERT struggles with two-hop rules (composition). However, by providing richer semantic context, even two-hop rules can be learned. Given that BERT can in principle learn some reasoning rules, the question arises whether it does so for standard training corpora. We find that BERT-large has only partially learned the types of rules we investigate here. For example, BERT has some notion of "X shares borders with Y" being symmetric, but it fails to understand rules like symmetry in other cases.

Memorization. During the course of pretraining, BERT sees more data than any human could read in a lifetime, an amount of knowledge that surpasses its storage capacity. We simulate this with a scaled-down version of BERT and a training set that ensures that BERT cannot memorize all facts in training. We identify two important factors that lead to successful memorization. (i) Frequency: Other things being equal, low-frequency facts are not learned whereas frequent facts are. (ii) Schema conformity: Facts that conform with the overall schema of their entities (e.g., "sparrows can fly" in a corpus with many similar facts about birds) are easier to memorize than exceptions (e.g., "penguins can dive"). We publish our code for training and data generation at https://github.com/BennoKrojer/reasoning-over-facts. | 0
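To make the synthetic-corpus setup above concrete, here is a small, hypothetical Python sketch that generates facts for a symmetric relation and holds out one direction of some pairs as cloze-style test queries; the entity names, relation label, corpus size and hold-out ratio are invented for illustration and are not the authors' actual data-generation code.

import random

def make_symmetry_data(n_entities=50, n_pairs=200, holdout=0.1, seed=0):
    """Generate (e, r, f) facts for a symmetric relation; for a held-out
    fraction of pairs, the reverse direction is kept as a test query that
    can only be answered by applying the symmetry rule."""
    rng = random.Random(seed)
    entities = ["ent%d" % i for i in range(n_entities)]
    relation = "married"
    train, test, seen = [], [], set()
    while len(seen) < n_pairs:
        e, f = rng.sample(entities, 2)
        if (e, f) in seen or (f, e) in seen:
            continue
        seen.add((e, f))
        train.append((e, relation, f))
        if rng.random() < holdout:
            test.append((f, relation, "[MASK]", e))  # gold answer is e
        else:
            train.append((f, relation, e))
    return train, test

train_facts, test_queries = make_symmetry_data()
print(len(train_facts), "training facts;", len(test_queries), "held-out cloze queries")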
Today, the interconnected nature of real-world applications brings more cross-field research problems leading to a much closer relationship between research areas. Real-world challenges require researchers to quickly get acquainted with knowledge in other areas. For example, imagine a researcher who is familiar with topic models wants to extend her research to opinion summarization. She would be more interested in finding out the current development of sentiment analysis and how topic models can be used in sentiment analysis, rather than the common background knowledge such as topic models and basic NLP technologies. Such a real-world demand encourages the study of multi-document comparative summarization for scientific papers in multiple subject areas. This paper presents the initial study on this problem.Comparative summarization aims at summarizing the differences among document groups (Wang et al., 2012) . The core is to compare different topics and find unique characteristics for each document group. The main motivation of this paper is to apply dTM to comparative summarization and to model the group-specific topics to capture the unique word usage for characterising documents in the same group. To our best knowledge, there is no previous study providing in-depth model analysis and detailed experimental results on dTM applied for comparative summarization.We first propose a probabilistic generative model dTM-Dirichlet to model the group-specific word distributions to capture the unique word usage for each document group. However, dTM-Dirichlet is not a truly differential topic model and it suffers from the problems of high inference cost, overparameterization and lack of sparsity. Evolving from the idea of SAGE (Eisenstein et al., 2011) , we develop dTM-SAGE to make the word probability distributions for each document group to share a common background word distribution and explicitly models how words are used differently in each group from the background word distribution.Our main contributions include the following two points: (1) we propose dTM to capture unique characteristics of each document group in the application background of comparative summarization for cross-area scientific papers; and (2) we propose two sentence scoring methods to measure the sen-tence discriminative capacity and a greedy sentence selection method to automatically generate summary for dTM-based comparative summarization. | 0 |
The focus in English parsing research in recent years has moved from Wall Street Journal parsing to improving performance on other domains. Our research aim is to improve parsing performance on text which is mildly ungrammatical, i.e. text which is well-formed enough to be understood by people yet which contains the kind of grammatical errors that are routinely produced by both native and nonnative speakers of a language. The intention is not to detect and correct the error, but rather to ignore it. Our approach is to introduce grammatical noise into WSJ sentences while retaining as much of the structure of the original trees as possible. These sentences and their associated trees are then used as training material for a statistical parser. It is important that parsing on grammatical sentences is not harmed and we introduce a parse-probability-based classifier which allows both grammatical and ungrammatical sentences to be accurately parsed. | 0 |
In recent years, RNN-based systems have proven excellent at a wide range of NLP tasks, sometimes achieving or even surpassing human performance on popular benchmarks. Their success stems from the complex but hard to interpret, representations that they learn from data. Given that syntax plays a critical role in human language competence, it is natural to ask whether part of what makes these models successful on language tasks is an ability to encode something akin to syntax.This question pertains to syntax "in the meaningful sense," that is, the latent, hierarchical, largely context-free phrase structure underpinning human language as opposed to superficial or shallow issues of word order (Chomsky, 1957; Marcus, 1984; Everaert et al., 2015; Linzen et al., 2016) . Clearly, syntactic information can be explicitly incorporated into neural systems to great effect (e.g., Dyer et al., 2016; Swayamdipta et al., 2018) . Less certain is whether such systems induce something akin to hierarchical structure (henceforth, "syntax") on their own when not explicitly taught to do so.Uncovering what an RNN actually represents is notoriously difficult, and several methods for probing RNNs' linguistic representations have been developed to approach the problem. Most directly, one can extract finite automata (e.g., Weiss et al., 2017) from the network or measure its state as it processes inputs to determine which neurons attend to what features (e.g., Shi et al., 2016; Linzen et al., 2016; Tenney et al., 2019) . Alternatively, one can present a task which only a syntactic model should be able to solve, such as grammaticality discrimination or an agreement task, and then infer if a model has syntactic representations based on its behavior (Linzen et al., 2016; Ettinger et al., 2018; Gulordava et al., 2018; Warstadt et al., 2019) .In practice, simple sentences far outnumber the ones that require syntax in any natural corpus, which may obscure evaluation (Linzen et al., 2016) . One way around this, referred to here as templatebased probing, is to either automatically generate sentences with a particular structure or extract just the relevant ones from a much larger corpus. Templates have been used in a wide range of studies, including grammaticality prediction (e.g., Warstadt et al., 2019) , long-distance dependency resolution, and agreement prediction tasks (e.g., Gulordava et al., 2018) . By focusing on just relevant structures that match a given template rather than the gamut of naturally occurring sentence, templatebased probing offers a controlled setting for evaluating specific aspects of a model's representation.The crux of behavioral evaluation is the assertion that the chosen task effectively distinguishes between a model that forms syntactic representations and one which does not. This must be demonstrated for each task -if a model that does not capture syntax can pass the evaluation, then there is no conclusion to be drawn. However, this step is often omitted (but not always, e.g., Gulordava et al., 2018; Warstadt et al., 2019) . Moreover, templatebased generation removes the natural sparse and diverse distribution of sentence types, increasing the chance that a system might pick up on nonsyntactic patterns in the data, further increasing the importance of a clear baseline.This problem is most clearly illustrated with an example. 
In the following sections, we introduce Prasad et al.'s (2019; henceforth PvSL) novel psycholinguistics-inspired template-based probe of relative clause types, which was taken as evidence in support of syntactic representation in LSTMs. We then pass PvSL's test with two non-syntactic baselines: an n-gram LM which can only capture short-distance word order of concrete types (Section 3), and an LSTM trained on scrambled inputs (Section 4). These baselines show that a combination of collocation and lexical representation can account for PvSL's results, which highlights a critical flaw in that experimental design. Following that, we argue that it is unlikely that LSTMs induce syntactic representations given current evidence and suggest an alternative angle for the question (Section 5). | 0
Open-domain question answering (QA) is the task of answering arbitrary factoid questions based on a knowledge source (e.g., Wikipedia). Recent state-of-the-art QA models are typically based on a two-stage retriever-reader approach (Chen et al., 2017) using a retriever that obtains a small number of relevant passages from a large knowledge source and a reader that processes these passages to produce an answer. Most recent successful retrievers encode questions and passages into a common continuous embedding space using two independent encoders (Karpukhin et al., 2020; Guu et al., 2020). Relevant passages are retrieved using a nearest neighbor search on the index containing the passage embeddings with a question embedding as a query. These retrievers often outperform classical methods (e.g., BM25), but they incur a large memory cost due to the massive size of their passage index, which must be stored entirely in memory at runtime. For example, the index of a common knowledge source (e.g., Wikipedia) requires dozens of gigabytes. A reduction in the index size is critical for real-world QA that requires large knowledge sources such as scientific databases (e.g., PubMed) and web-scale corpora (e.g., Common Crawl). In this paper, we introduce Binary Passage Retriever (BPR), which learns to hash continuous vectors into compact binary codes using a multi-task objective that simultaneously trains the encoders and hash functions in an end-to-end manner (see Figure 1).

Figure 1: Architecture of BPR, a BERT-based model generating compact binary codes for questions and passages. The passages are retrieved in two stages: (1) efficient candidate generation based on the Hamming distance using the binary code of the question and (2) accurate reranking based on the inner product using the continuous embedding of the question.

In particular, BPR integrates our learning-to-hash technique into the state-of-the-art Dense Passage Retriever (DPR) (Karpukhin et al., 2020) to drastically reduce the size of the passage index by storing it as binary codes. BPR computes binary codes by applying the sign function to continuous vectors. As the sign function is not compatible with back-propagation, we approximate it using the scaled tanh function during training. To improve search-time efficiency while maintaining accuracy, BPR is trained to obtain both binary codes and continuous embeddings for questions with multi-task learning over two tasks: (1) candidate generation based on the Hamming distance using the binary code of the question and (2) reranking based on the inner product using the continuous embedding of the question. The former task aims to detect a small number of candidate passages efficiently from the entire set of passages, and the latter aims to rerank the candidate passages accurately. We conduct experiments using the Natural Questions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (TQA) (Joshi et al., 2017) datasets. Compared with DPR, our BPR achieves similar QA accuracy and competitive retrieval performance with a substantially reduced memory cost from 65GB to 2GB. Furthermore, using an improved reader, we achieve results that are competitive with those of the current state of the art in open-domain QA. Our code and trained models are available at https://github.com/studio-ousia/bpr. | 0
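The following sketch is a simplified, assumption-laden rendering of the two ideas described above: a scaled tanh stands in for the sign function during training, and search proceeds by Hamming-distance candidate generation over stored binary codes followed by inner-product reranking with the continuous question embedding. The embedding dimensionality, the random vectors and the brute-force index are placeholders; the real system builds on BERT encoders and an efficient binary index.

import numpy as np

def binarize(x, beta=1.0, training=False):
    """Hash continuous vectors to {-1, +1} codes; during training the hard
    sign is approximated by a scaled tanh so gradients can flow."""
    return np.tanh(beta * x) if training else np.sign(x)

def retrieve(question_emb, passage_codes, top_c=100, top_k=10):
    """Stage 1: Hamming-distance candidate generation with the question's
    binary code. Stage 2: rerank candidates by the inner product between the
    continuous question embedding and the stored binary passage codes."""
    q_code = binarize(question_emb)
    hamming = np.sum(q_code[None, :] != passage_codes, axis=1)
    candidates = np.argsort(hamming)[:top_c]
    scores = passage_codes[candidates] @ question_emb
    return candidates[np.argsort(-scores)[:top_k]]

# Toy usage with random 768-dimensional embeddings.
rng = np.random.default_rng(0)
passage_codes = binarize(rng.standard_normal((1000, 768)))  # the stored binary index
question_emb = rng.standard_normal(768)
print(retrieve(question_emb, passage_codes))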
Established methods for the identification of word translations are based on parallel (Brown et al., 1990) or comparable corpora (Fung & McKeown, 1997; Fung & Yee, 1998; Rapp, 1995; Rapp 1999; Chiao et al., 2004) . The work using parallel corpora such as Europarl (Koehn, 2005; Armstrong et al., 1998) or JRC Acquis (Steinberger et al., 2006) typically performs a length-based sentence alignment of the translated texts, and then tries to conduct a word alignment within sentence pairs by determining word correspondences that get support from as many sentence pairs as possible. This approach works very well and can easily be put into practice using a number of freely available open source tools such as Moses (Koehn et al., 2007) and Giza++ (Och & Ney, 2003) .However, parallel texts are a scarce resource for many language pairs (Rapp & Martín Vide, 2007) , which is why methods based on comparable corpora have come into focus. One approach is to extract parallel sentences from comparable corpora (Munteanu & Marcu, 2005; Wu & Fung, 2005) . Another approach relates co-occurrence patterns between languages. Hereby the underlying assumption is that across languages there is a correlation between the cooccurrences of words which are translations of each other. If, for example, in a text of one language two words A and B co-occur more often than expected by chance, then in a text of another language those words which are the translations of A and B should also co-occur more frequently than expected.However, to exploit this observation some bridge needs to be built between the two languages. This can be done via a basic dictionary comprising some essential vocabulary. To put it simply, this kind of dictionary allows a (partial) word-by-word translation from the source to the target language, 1 so that the result can be considered as a pair of monolingual corpora. Deal-ing only with monolingual corpora means that the established methodology for computing similar words (see e.g. Pantel & Lin, 2002) , which is based on Harris' (1954) distributional hypothesis, can be applied. It turns out that the most similar words between the two corpora effectively identify the translations of words.This approach based on comparable corpora considerably relieves the data acquisition bottleneck, but has the disadvantage that the results tend to lack accuracy in practice.As an alternative, there is also the approach of identifying orthographically similar words (Koehn & Knight, 2002) which has the advantage that it does not even require a corpus. A simple word list will suffice. However, this approach works only for closely related languages, and has limited potential otherwise.We propose here to generate dictionaries on the basis of foreign word occurrences in texts. As far as we know, this is a method which has not been tried before. When doing so, a single monolingual corpus can be used for all source languages for which it contains a sufficient number of foreign words. A constraint is that the target language must always be the language of the monolingual corpus, 2 which therefore all dictionaries have in common. | 0 |
Recent years have seen the rapid growth of social media platforms (e.g., Facebook, Twitter, several blogs), which has changed the way that people communicate. Many people express their opinions and emotions on blogs, forums or microblogs. Detecting the emotions that are expressed in social media is a very important problem for a wide variety of applications. For example, enterprises can detect complaints from customers about their products or services and act promptly. Emotion detection aims at identifying various emotions from text. According to Plutchik (1980), there are eight basic emotions: anger, joy, sadness, fear, trust, surprise, disgust and anticipation. Considering the abundance of opinions and emotions expressed in microblogs, emotion and sentiment analysis in Twitter has attracted the interest of the research community (Giachanou and Crestani, 2016). In particular, the Implicit Emotion Shared Task (IEST) is a shared task at WASSA 2018 that focuses on emotion analysis. In this task, participants are asked to develop tools that can predict the emotions in tweets from which a certain emotion word has been removed. This is a very challenging problem since the emotion analysis needs to be done without access to an explicit mention of an emotion word, and must consequently take advantage of the context that surrounds the target word. In this paper, we describe our system submitted to IEST: the WASSA-2018 Implicit Emotion Shared Task. Our system is based on a bidirectional Long Short-Term Memory (biLSTM) network on top of word embeddings, which is later inserted into a pseudo-relevance feedback scheme. Our results show that even though the model still needs more refinement, it offers interesting capabilities for addressing the task at hand. | 0
Taiwan has become the first in Asia to legalize same-sex marriage. Nevertheless, intense debate on whether same-sex marriage should be legalized between the pro-and against-same-sex marriage groups has not lessened. In the debate, discourses around homosexual has been reproduced and circulated, many of which concern the consequence of same-sex marriage, such as homosexual education, civil law of adopting child and homosexual parenting. This paper aims to address the question of the conflict between opposing stances at the level of lexical semantics. I will use 'homosexual' as a keyword and examine how the word is used in each stance on the issue with help of distributional corpus analysis. However, this issue is complicated by the fact that there are two synonymous words in Mandarin Chinese, that is tóngxìngliàn and tóngzhì, both of which are used frequently and can be used under similar register and genre. This paper addresses two research questions:1. From their collocational behavior, what are the semantic difference between tóngzhì and tóngxìngliàn?2. What are the characteristic usages of these two words in the opposing stances? What is the relationship between the stances and choices of words?Below I will first introduce these two words and discuss in what sense they are near synonymous.tóngxìngliàn and tóngzhìIn English world, the word homosexual, according to Tamagne (2004) , came into existence at the end of the 19th century and was allegedly first used by the Hungarian journalist Karoly Maria Kertbeny in 1869. Before the introduction of the homosexual-heterosexual distinction, "being homosexual was not seen as a quality of the individual but as a quality of a single act, which was equated with sodomy" (Tamagne, 2004: 7) . Concerning the word denoting homosexual in Chinese, the word tóngxìngliàn came earlier than tóngzhì. tóngxìngliàn came from Japanese 同 性 愛 douseiai, which was the translation of the English word homosexual around 1900 (Sang, 2014: 111-4) . Other morphological related lexical items are yìxìngliàn 'heterosexual' and shuāngxìngliàn 'bisexual'.On the other hand, tóngzhì originally means 'comrade'. This sense of tóngzhì is still in use, especially in the context of political parties. It is then intriguing to answer the question that how the word with political connotation obtained its meaning as the label for homosexual at that time. One commonly accepted answer is that, the meaning of homosexual was first given to tóngzhì by Mai-Ke Lin (林邁克) and then carried over through the influence of Hong kong directress Yi-Hua Lin (林奕華) (Ji, 2015) . This standardized answer is not without problem. For example, Ji (2015) gave a more detailed historical view on how the word tóngzhì was appropriated and adapted into Taiwan society.Synonymy is one of paradigmatic semantic relations of two lexical items whose similarity in meaning is more striking than their difference. The issue of synonym concerns lexical choice (Glynn, 2010) , meaning that given two words with synonymous meaning, our task as linguists is to ask what features determine not one word but the other one should be used in a given context. At the first glance, the senses of the two words seem not to be synonymous but only similar. Tóngxìngliàn denotes a sexual orientation, whereas tóngzhì refers to the person who has such sexual orientation. If we use the classification of synonymy given by Cruse (2010: 142-5) , synonym is said to have three sub-types: absolute-synonym, propositional-synonym and near-synonym. 
The absolute-synonym is primarily used as a theoretical endpoint whose exis-tence in real language is rare, because difference in form implies difference in meaning. Therefore tóngzhì and tóngxìngliàn are not of this type. Second, two words are said to be propositionalsynonym if the truth value remains the same when they are used a sentence. The examples below show that tóngxìngliàn and tóngxìngliàn are not of propositional-synonym.同性戀/* 同志在以前被視作一種心理疾病︒ tóngxìngliàn/*tóngzhì zàiyǐqián bèi shìzuò yīzhǒng xīnlǐjíbìng 'Homosexual/Gay is used to be considered as mental illness.'The abnormality of using tóngzhì in this sentence lies in the fact that it does not denote the concept of a kind of sexual orientation as tóngxìngliàn but the person who has such sexual orientation, which suggests that these two words are far from being synonymous. Just as what Cruse has claimed, "The borderline between propositional synonymy and near synonymy is at least in principle clear ... (while) the borderline between near synonymy and non-synonymy is much less straightforward." Although tóngzhì and tóngxìngliàn are more closed to the side of non-synonym at the first glance, I am now going to argue that they are near synonym in a specific context, that is, when they are used as a compound modifier in N-N compounds such as tóngzhì/tóngxìngliàn-hūnyīn 'homosexual marriage', tóngzhì/tóngxìngliàn-yùndòng 'homosexual movement', tóngzhì/tóngxìngliàn-jiātíng 'homosexual family' and tóngxìngliàn/tóngzhìyìtí 'homosexual issue'. In these compounds, the difference of homosexual as people and homosexual as a sexual orientation between tóngzhì and tóngxìngliàn no longer exists. | 0 |
Perception of foreign accent is mainly due to a difference of pronunciation between what a speaker said and a norm shared by natives of a target language (Alazard, 2013) . This difference has been mostly described on the segmental level, through theories of language acquisition, but the role of prosody was proved only recently in accent perception studies (De Meo et al., 2012; Pellegrino, 2012) . Today, it is clear that both segmental and suprasegmental levels are important in perception of this difference by native speakers. Among prosodic parameters, rhythm is one that varies noticeably from one language to another. (Gibbon and Gut, 2001 ) define rhythm as the recurrence of patterns of weak and strong elements in time. Those elements can be short or long syllables, low or high intonation on vowel or consonant segments. (Di Cristo and Hirst, ) define it as the temporal organisation of prominences. (Arvaniti, 2009) talks about perception of repetition and similarity patterns. (Alazard, 2013) refers to physical and psychobiological phenomena like dance and music rhythm, or cardiac rhythm. All these definitions put forward the idea of timing patterns; length, height or intensity patterns in time. Many studies try to classify languages based exclusively on rhythmic parameters. Most of them rely on vowels and consonant duration, and therefore need an aligned transcription. Furthermore, some recent studies showed similar results with voicing and unvoicing duration or syllables duration (Fourcin and Dellwo, 2013; Dellwo and Fourcin, 2013; Dellwo et al., 2015) , parameters that may be automatically detected from signal without any transcription. This is great news since automatic transcription might be an issue with non-native speech, and manual transcription would be too long and costly.Most studies also show limits of their results due to small amount of speakers or elicitation bias (Fourcin and Dellwo, 2013; White and Mattys, 2007b; Ramus et al., 1999; Gibbon and Gut, 2001; Grabe and Low, 2002) . Hence it is necessary to use a large corpus with the largest variation possible, with a lot of spontaneous speech, since rhythm might strongly vary depending on situations and speakers, and even more in spontaneous conversations (Bhat et al., 2010) . It is still today extremely important to be able to measure language in its variety, without focusing on very specific conditions (Astésano, 2016) .In this study we tried to model rhythm of French through the recently published Corpus d'Étude pour le Français Contemporain (CEFC, Benzitoun et al. (2016) ), which offers a wide variety of French. In order to model this variety as much as possible, we trained a gaussian mixture model on several acoustic parameters, and evaluated the model with test native speakers, and non-native from CEFC and an other corpus. Acoustic parameters are all based on syllabic nuclei detected by a Praat script from De Jong and Wempe (2009) , and voicing detection tool from Praat (Boersma and Weenink, 2019) . Meanwhile, we also evaluated the efficiency of each parameter to distinguish native and non-native depending on the corpus.Our concern is to determine which rhythmic parameters make the difference -and are mostly responsi-ble for the foreign accent -depending on the mother tongue of the speaker. Once these parameters are known, we will be able to offer more appropriate remedial measures to learners. | 0 |
The current state-of-the-art in machine translation uses a phrase-based approach to translate individual sentences from the source language to the target language. This technique [1] gave significant improvement over word to word translation originally developed at IBM [2] . However, regardless of the underlying models, the search or decoding process in statistical machine translation is computationally demanding, particularly with regard to word or phrase reordering. Effective pruning and novel reordering techniques have helped reduce the search space to something more manageable, but search still remains a difficult problem in statistical machine translation.Our goal was to create a fast and lightweight decoder that could be used for our research system as well as for a realtime speech-to-speech translation demonstration system. For the research system, the decoder had to support two-pass decoding via n-best list re-ranking or lattice rescoring. The decoder also had to be memory efficient enough to handle search with very large numbers of active nodes. For the real-time demonstration system, we required fast decoding as well as a common API that would allow integration with various other components such as speech recognition and textto-speech systems. Real-time decoding also requires that the translation models, which can approach several gigabytes in size, be read efficiently from disk on demand as opposed to the pre-filtering of models typically done during batch processing. | 0 |
Translation dictionaries used in multilingual natural language processing applications such as machine translation have traditionally been built manually, but this work requires a great deal of labor and it is difficult to keep the dictionary descriptions consistent. Therefore, research on automatically extracting translation pairs from parallel corpora has recently become active (Gale and Church, 1991; Kaji and Aizono, 1996; Tanaka and Iwasaki, 1996; Kitamura and Matsumoto, 1996; Fung, 1997; Melamed, 1997; Sato and Nakanishi, 1998). This paper proposes a method for learning and extracting bilingual word sequence correspondences from non-aligned parallel corpora with Support Vector Machines (SVMs) (Vapnik, 1999). SVMs are large-margin classifiers (Smola et al., 2000) based on the strategy of maximizing the margin between the separating boundary and the feature vectors of the training samples. As a result, SVMs have a higher generalization ability than other learning models such as decision trees and rarely overfit the training samples. In addition, by using kernel functions, they can learn non-linear separating boundaries and dependencies between the features. SVMs have therefore recently been used for natural language processing tasks such as text categorization (Joachims, 1998; Taira and Haruno, 1999), chunk identification (Kudo and Matsumoto, 2000b) and dependency structure analysis (Kudo and Matsumoto, 2000a). The method proposed in this paper does not require aligned parallel corpora, which are still scarce at present. Therefore, word sequence correspondences can be extracted without limiting the applicable domains. | 0
Classification systems, from simple logistic regression to complex neural network, typically predict posterior probabilities over classes and decide the final class with the maximum probability. The model's performance is then evaluated by how accurate the predicted classes are with respect to outof-sample, ground-truth labels. In some cases, however, the quality of posterior estimates themselves must be carefully considered as such estimates are often interpreted as a measure of confidence in the final prediction. For instance, a well-predicted posterior can help assess the fairness of a recidivism prediction instrument (Chouldechova, 2017) or select the optimal number of labels in a diagnosis code prediction (Kavuluru et al., 2015) . Guo et al. (2017) showed that a model with high classification accuracy does not guarantee good posterior estimation quality. In order to correct the poorly calibrated posterior probability, existing calibration methods (Zadrozny and Elkan, 2001; Platt et al., 1999; Guo et al., 2017; Kumar et al., 2019) generally rescale the posterior distribution predicted from the classifier after training. Such post-processing calibration methods re-learn an appropriate distribution from a held-out validation set and then apply it to an unseen test set, causing a severe discrepancy in distributions across the data splits. The fixed split of the data sets makes the post-calibration very limited and static with respect to the classifier's performance.We propose a simple but effective training technique called Posterior Calibrated (PosCal) training that optimizes the task objective while calibrating the posterior distribution in training. Unlike the post-processing calibration methods, PosCal directly penalizes the difference between the predicted and the true (empirical) posterior probabilities dynamically over the training steps.PosCal is not a simple substitute of the postprocessing calibration methods. Our experiment shows that PosCal can not only reduce the calibration error but also increase the task performance on the classification benchmarks: compared to the baseline MLE (maximum likelihood estimation) training method, PosCal achieves 2.5% performance improvements on GLUE (Wang et al., 2018) and 0.5% on xSLUE (Kang and Hovy, 2019) , and at the same time 16.1% posterior error reduction on GLUE and 13.2% on xSLUE. | 0 |
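As a hedged sketch of the general idea (not the paper's exact PosCal objective), the snippet below adds a calibration-style penalty to the usual cross-entropy during training: predicted confidences are binned and the squared gap between mean confidence and empirical accuracy in each bin is penalized. The bin count, the weight lam and the toy batch are assumptions for illustration, and PyTorch is assumed to be available.

import torch
import torch.nn.functional as F

def calibration_aware_loss(logits, labels, n_bins=10, lam=1.0):
    """Cross-entropy plus a binned penalty on the gap between predicted
    confidence and empirical accuracy (a surrogate calibration term)."""
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)
    correct = (pred == labels).float()
    penalty = logits.new_zeros(())
    edges = torch.linspace(0.0, 1.0, n_bins + 1, device=logits.device)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            penalty = penalty + (conf[mask].mean() - correct[mask].mean()) ** 2
    return ce + lam * penalty

# Toy usage on a random batch.
logits = torch.randn(32, 3, requires_grad=True)
labels = torch.randint(0, 3, (32,))
loss = calibration_aware_loss(logits, labels)
loss.backward()
print(float(loss))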
International integration, along with the exponential growth of the Internet, has led to an increase in plagiarism, imitation of celebrities' writing style, and copyright disputes. Due to the enormous amount of information, examining the style and characteristics of written works in order to identify the author's style is a huge challenge. Globally, there have been numerous studies that develop models to identify an author's style in many languages. However, there are very few studies in natural language processing that apply writing style in Vietnamese to attribute authorship. Stylometry, beginning with attempts to settle authorship disputes, was first developed by Augustus De Morgan in 1851 based on word length. By the late 1880s, Thomas C. Mendenhall had analyzed the word length distribution for works written by Bacon, Marlowe, and Shakespeare to determine the true author of plays supposedly written by Shakespeare. In 1932, George Kingsley Zipf discovered the connection between the ranking and the frequency of words, later stated in Zipf's law. In 1944, George Yule created a way to measure the frequency of words, used to analyze vocabulary richness, namely Yule's characteristic. In the early 1960s, most research papers referred to Mosteller and Wallace's work on the Federalist Papers, which was considered a basis for using computation in stylometry. In the next several decades, the increasing number of digital texts, as well as the growth of the Internet, machine learning techniques, and neural networks, led to the development of natural language processing tools. Semantics continued to grow in the 21st century, and due to the overwhelming amount of information, copying texts also became more popular, leading to the growth of stylometry, which is used in plagiarism detection, author identification, author profiling, etc. In this paper, we use a corpus of Vietnamese online texts to attribute authorship using the following measures: Mendenhall's characteristic curves, Kilgarriff's chi-squared, and John Burrows's Delta measure.
Adversarial Stylometry: When translated, a piece of writing has its style imitated, and going through many translators makes its characteristics less distinct. These changes make detecting the original style more difficult.
Detecting stylistic similarities includes the following tasks:
Stylochronometry: Over time, an author may change his/her writing style due to changes in vocabulary, lifestyle, environment, age, etc. Such studies are sharply delimited because they depend on a language in a specific time period and on a particular author.
Author Profiling: extracting the characteristics of a text to gain information about an author such as gender, age, region, or time of writing.
Authorship Verification: based on characteristics readily available in the training data, determining whether two texts were written by the same author.
Authorship Attribution: an individual or group of authors has characteristic styles that are developed subconsciously. Based on these distinctions, we will identify the true author(s) of texts in a corpus. | 0
In this paper, we describe a new algorithm for recovering WH-trace empty nodes in gold parse trees in the Penn Treebank and, more importantly, in automatically generated parses. This problem has only been investigated by a handful of researchers and yet it is important for a variety of applications, e.g., mapping parse trees to logical representations and structured representations for language modeling. For example, SuperARV language models (LMs) (Wang and Harper, 2002; Wang et al., 2003) , which tightly integrate lexical features and syntactic constraints, have been found to significantly reduce word error in English speech recognition tasks. In order to generate SuperARV LM training, a state-ofthe-art parser is used to parse training material and then a rule-based transformer converts the parses to the SuperARV representation. The transformer is quite accurate when operating on treebank parses; however, trees produced by the parser lack one important type of information -gaps, particularly WHtraces, which are important for more accurate extraction of the SuperARVs.Approaches applied to the problem of empty node recovery fall into three categories. Dienes and Dubey (2003) recover empty nodes as a preprocessing step and pass strings with gaps to their parser. Their performance was comparable to (Johnson, 2002) ; however, they did not evaluate the impact of the gaps on parser performance. Collins (1999) directly incorporated wh-traces into his Model 3 parser, but he did not evaluate gap insertion accuracy directly. Most of the research belongs to the third category, i.e., post-processing of parser output. Johnson (2002) used corpus-induced patterns to insert gaps into both gold standard trees and parser output. Campbell (2004) developed a set of linguistically motivated hand-written rules for gap insertion. Machine learning methods were employed by (Higgins, 2003; Levy and Manning, 2004; Gabbard et al., 2006) .In this paper, we develop a probabilistic model that uses a set of patterns and tree matching to guide the insertion of WH-traces. We only insert traces of non-null WH-phrases, as they are most relevant for our goals. Our effort differs from the previous approaches in that we have developed an algorithm for the insertion of gaps that combines a small set of expressive patterns with a probabilistic grammar-based model. | 0 |
Computational literary analysis works at the intersection of natural language processing and literary studies, drawing on the structured representation of text to answer literary questions about character (Underwood et al., 2018) , objects (Tenen, 2018) and place (Evans and Wilkens, 2018) .Much of this work relies on the ability to extract entities accurately, including work focused on modeling (Bamman et al., 2014; Iyyer et al., 2016; Chaturvedi et al., 2017) . And yet, with notable exceptions (Vala et al., 2015; Brooke et al., 2016) , nearly all of this work tends to use NER models that have been trained on non-literary data, for the simple reason that labeled data exists for domains like news through standard datasets like ACE (Walker et al., 2006) , CoNLL (Tjong Kim Sang and De Meulder, 2003) and OntoNotes (Hovy et al., 2006) -and even historical non-fiction (De-Lozier et al., 2016; Rayson et al., 2017) -but not for literary texts. This is naturally problematic for several reasons: models trained on out-of-domain data surely degrade in performance when applied to a very different domain, and especially for NER, as Augenstein et al. (2017) has shown; and without indomain test data, it is difficult to directly estimate the severity of this degradation. At the same time, literary texts also demand slightly different representations of entities. While classic NER models typically presume a flat entity structure (Finkel and Manning, 2009) , relevant characters and places (and other entities) in literature need not be flat, and need not be named: The cook's sister ate lunch contains two PER entities ([The cook] and [The cook's sister]).We present in this work a new dataset of entity annotations for a wide sample of 210,532 tokens from 100 literary texts to help address these issues and help advance computational work on literature. These annotations follow the guidelines set forth by the ACE 2005 entity tagging task (LDC, 2005) in labeling all nominal entities (named and common alike), including those with nested structure. In evaluating the stylistic difference between the texts in ACE 2005 (primarily news) and the literary texts in our new dataset, we find considerably more attention dedicated to people and settings in literature; this attention directly translates into substantially improved accuracies for those classes when models are trained on them. The dataset is freely available for download under a Creative Commons ShareAlike 4.0 license at https://github.com/dbamman/ litbank. | 0 |
Translating natural language (NL) descriptions into executable programs is a fundamental problem for computational linguistics. An end user may have difficulty to write programs for a certain task, even when the task is already specified in NL. For some tasks, even for developers, who have experience in writing programs, it can be time consuming and error prone to write programs based on the NL description of the task. Naturally, automatically synthesizing programs from NL can help alleviate the preceding issues for both end users and developers.Recent research proposes syntax-based approaches to address some tasks of this problem in different domains, such as regular expressions (regex) (Locascio et al., 2016) , Bash scripts (Lin et al., 2017) , and Python programs (Yin and Neubig, 2017) . These approaches typically train a sequence-to-sequence learning model using maximum likelihood estimation (MLE). Using MLE encourages the model to output programs that are syntactically similar with the ground-truth programs in the training set. However, such syntaxbased training objective deviates from the goal of synthesizing semantically equivalent programs. Specifically, these syntax-based approaches fail to handle the problem of Program Aliasing (Bunel et al., 2018) , i.e., a semantically equivalent program may have many syntactically different forms. Table 1 shows some examples of the Program Aliasing problem. Both Program 1 and Program 2 are desirable outputs for the given NL specification but one of them is penalized by syntax-based approaches if the other one is used as the ground truth, compromising the overall effectiveness of these approaches.In this paper, we focus on generating regular expressions from NL, an important task of the program-synthesis problem, and propose Sem-Regex, a semantics-based approach to generate regular expressions from NL specifications. Regular expressions are widely used in various applications, and "regex" is one of the most common tags in Stack Overflow 1 with more than 190, 000 related questions. The huge number of regex-related questions indicates the importance of this task.Different from the existing syntax-based approaches, SemRegex alters the syntax-based training objective of the model to a semantics-based objective. To encourage the translation model to generate semantically correct regular expressions, instead of MLE, SemRegex trains the model by maximizing the expected semantic correctness of generated regular expressions. We follow the technique of policy gradient (Williams, 1992) to estimate the gradients of the semantics-based objective and perform optimization. The measurement of semantic correctness serves as a key part in the semantics-based objective, which should represent the semantics of programs. In this paper, we convert a regular expression to a minimal Deterministic Finite Automaton (DFA). Such conversion is based on the insight that semantically equivalent regular expressions have the same minimal DFAs. We define the semantic correctness of a generated regular expression as whether its corresponding minimal DFA is the same as the ground truth's minimal DFA.When our approach is applied on domains other than regular expressions such as Python programs and Bash scripts, a perfect equivalence oracle such as minimal DFAs may not be feasibly available. 
To handle a more general case, we propose correctness assessment based on test cases for regular expressions; such correctness assessment can be easily generalized to other program-synthesis tasks. Concretely, we generate test cases to represent the semantics of the ground truth. For a generated regular expression, we assess its semantic correctness by checking whether it can pass all the test cases. However, a regular expression may have infinitely many positive (i.e., matched) or negative (i.e., unmatched) string examples; thus, we cannot perfectly represent the semantics. To use limited string examples to differentiate whether a generated regular expression is semantically correct or not, we propose an intelligent strategy for test generation to generate distinguishing test cases instead of just using random test cases. We evaluate SemRegex on three public datasets: NL-RX-Synth, NL-RX-Turk (Locascio et al., 2016), and KB13 (Kushman and Barzilay, 2013). We compare SemRegex with the existing state-of-the-art approaches on the task of generating regular expressions from NL specifications. Our evaluation results show that SemRegex outperforms the state-of-the-art approaches on all three datasets. The evaluation results confirm that by maximizing semantic correctness, the model can output more correct regular expressions even when the regular expressions are syntactically different from the ground truth. In summary, this paper makes the following three main contributions. (1) We propose a semantics-based approach that optimizes a semantics-based objective for the task of generating regular expressions from NL specifications. (2) We introduce the measurement of semantic correctness based on test cases, and propose a strategy to generate distinguishing test cases, in order to measure semantic correctness better than using random test cases. (3) We evaluate our approach on three public datasets. The evaluation results show that our approach outperforms the existing state of the art on all three datasets. | 0
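A minimal sketch of test-case-based correctness checking, assuming full-string matching semantics: a candidate regex earns a reward of 1 only if it accepts every positive example and rejects every negative one. The example specification and strings are invented; the actual system generates distinguishing test cases from the ground-truth regex rather than writing them by hand.

import re

def passes_test_cases(candidate_pattern, positives, negatives):
    """Return 1.0 if the candidate regex fully matches all positive strings
    and rejects all negative strings, else 0.0 (unparsable candidates score 0)."""
    try:
        regex = re.compile(candidate_pattern)
    except re.error:
        return 0.0
    ok_pos = all(regex.fullmatch(s) for s in positives)
    ok_neg = not any(regex.fullmatch(s) for s in negatives)
    return 1.0 if (ok_pos and ok_neg) else 0.0

# Toy specification: "one or more lowercase letters followed by a digit".
positives = ["abc1", "z9"]
negatives = ["abc", "1abc", "AB1"]
print(passes_test_cases(r"[a-z]+[0-9]", positives, negatives))   # 1.0
print(passes_test_cases(r"[a-z]*[0-9]?", positives, negatives))  # 0.0, it also accepts "abc"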
One of the key aspects of a functional, free society is being presented with comprehensive options in electing government representatives. The decision is aided by the positions politicians take on relevant issues like water, housing, etc. Hence, it becomes important to relay political standings to the general public in a comprehensible manner. The Hansard transcripts of speeches delivered in the UK Parliament are one such source of information. However, owing to the voluminous quantity, esoteric language and opaque procedural jargon of Parliament, it is tougher for the non-expert citizen to assess the standings of their elected representative. Therefore, conducting stance classification studies on such data is a challenging task with potential benefits. However, the documents tend to be highly tedious and difficult to comprehend, and thus become a barrier to information about political issues and leanings. Sentiment analysis of data from various relevant sources (social media, newspapers, transcripts, etc.) has often given several insights about public opinion, issues of contention, general trends and so on (Carvalho et al., 2011; Loukis et al., 2014). Such techniques have even been used for purposes like predicting election outcomes and the reception of newly-launched products. Since these insights have wide-ranging consequences, it becomes imperative to develop rigorous standards and state-of-the-art techniques for them. One aspect that helps with analyzing such patterns and sentiments is studying the interconnections and networks underlying such data. Homophily, or the tendency of people to associate with like-minded individuals, is the fundamental aspect of depicting relationships between users of a social network (for instance). Constructing graphs to map such complex relationships and attributes in data could help one arrive at ready insights and conclusions. This is particularly useful when studying parliamentary debates and sessions; connecting speakers according to factors like party or position affiliations provides information on how a speaker is likely to respond to an issue being presented. Attempts to analyze social media data based on such approaches have been made (Deitrick and Hu, 2013). | 0
Relation Extraction (RE) (Bach and Badaskar, 2007) is a fundamental task of Natural Language Processing (NLP), which aims to extract the relations between entities in sentences and can be applied to other advanced tasks (Hu et al., 2021).

Figure 1: An intuitive illustration of the difference in ways to introduce relation information between most existing works and our proposed approach. The orange vector and the blue vector denote representations of relations and prototypes, respectively.

However, RE usually suffers from labeling difficulties and training data scarcity due to the massive cost of labour and time. In order to solve the problem of data scarcity, the Few-Shot Relation Extraction (FSRE) task (Han et al., 2018; Gao et al., 2019a; Qu et al., 2020; Yang et al., 2021) has become a research hotspot in academia in recent years. The task first trains on large-scale data covering existing relation types and then quickly migrates to a small amount of data for new relation types. Inspired by the success of few-shot learning in the computer vision (CV) community (Sung et al., 2018; Garcia and Bruna, 2018), various methods have been introduced into FSRE. One of the popular algorithms is the Prototype Network (Snell et al., 2017), which is based on the meta-learning framework (Vilalta and Drissi, 2002; Vanschoren, 2018). In detail, collections of few-shot tasks sampled from external data containing disjoint relations are used as the training set for model optimization. For each few-shot task, the center of each relation class is calculated and used as the prototype of that relation class. Then, the model can be optimized by reducing the distances between the query samples and their corresponding prototypes. Given a new sample, the model determines which of the class prototypes is nearest to the new sample and assigns it to this relation class. In order to get better results, many works have utilized relation information (i.e., relation labels or descriptions) to assist model learning. TD-proto (Yang et al., 2020) enhanced the prototypical network with both relation and entity descriptions. CTEG (Wang et al., 2020) proposed a model that learns to decouple high co-occurrence relations, where two types of external information are added. Another intuitive idea is to help the model learn good prototypes or representations, that is, to reduce intra-class distances while widening the distances among different classes (Han et al., 2021; Dong et al., 2021), where Han et al. (2021) introduced a novel approach based on supervised contrastive learning that learns better prototype representations by utilizing prototypes and relation labels and descriptions during model training, and Dong et al. (2021) considered a semantic mapping framework, MapRE, which leverages both label-agnostic and label-aware knowledge in the pre-training and fine-tuning processes. However, there are two limitations in how these works introduce relation information. The first is that most of them adopt implicit constraints, like contrastive learning or relation graphs, instead of direct fusion, which can be weak when facing remote samples. The second is that they usually adopt complicated designs or networks, like hybrid features or elaborate attention networks, which can bring too many or even harmful parameters. Therefore, in this paper, we propose a straightforward yet effective way to incorporate the relation information into the model.
Specifically, on the one hand, the same encoder is used to encode relation information and sentences, mapping them into the same semantic space. On the other hand, we generate the relation representation for each relation class by concatenating two relation views (i.e., the [CLS] token embedding and the mean value of the embeddings of all tokens), which allows relation representations and prototypes to have the same dimension. Afterwards, the generated relation representation is directly added to the prototype for enhancing model training and prediction. Figure 1 shows an intuitive illustration of the difference in ways to introduce relation information between most existing works and our proposed approach. Based on the two limitations of previous works mentioned above, we provide two possible high-level ideas about why our proposed approach should work for few-shot relation extraction. The first is that the direct addition is a more robust way to generate promising prototypes than implicit constraints when facing remote samples. The second is that the direct addition does not bring extra parameters and simplifies the model. Due to possible over-fitting, fewer parameters are always better than more parameters, especially for few-shot tasks. We conduct experimental analyses in the experiment section for further demonstration.

Figure 2: The overall structure of our proposed approach, in which the sentence and the relation information share the same encoder, and the relation representation is generated through an operation over two views of the relation and added to the original prototype representation. (The two operator symbols in the figure denote the concatenation and addition operations, respectively.)

We conduct experiments on the popular FSRE benchmark FewRel 1.0 (Han et al., 2018) under four few-shot settings. Experimental results show considerable improvements and comparable results to the state of the art, which demonstrates the effectiveness of our proposed approach, i.e., the direct addition operation. | 0
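The sketch below illustrates the direct-addition idea under several simplifying assumptions: encoder outputs are replaced by random vectors, the two relation views are a stand-in [CLS] vector and a mean-pooled token matrix, and classification is by dot-product similarity to the nearest enhanced prototype. Class names and dimensions are invented for the example.

import numpy as np

def relation_representation(cls_vec, token_vecs):
    """Concatenate the two relation views: the [CLS] embedding and the mean
    of all token embeddings of the relation description."""
    return np.concatenate([cls_vec, token_vecs.mean(axis=0)])

def enhanced_prototypes(support_embs, relation_reps):
    """Prototype = mean of support embeddings plus the relation representation
    (direct addition, no extra parameters)."""
    return {c: support_embs[c].mean(axis=0) + relation_reps[c] for c in support_embs}

def classify(query_emb, prototypes):
    """Assign the query to the class whose prototype has the highest dot-product similarity."""
    return max(prototypes, key=lambda c: float(query_emb @ prototypes[c]))

# Toy example with 2 classes and 5 support instances each; sentence embeddings
# have dimension 2*d so they match the concatenated relation views.
rng = np.random.default_rng(0)
d = 4
support = {"founder_of": rng.standard_normal((5, 2 * d)),
           "born_in": rng.standard_normal((5, 2 * d))}
relations = {c: relation_representation(rng.standard_normal(d),
                                         rng.standard_normal((7, d)))
             for c in support}
prototypes = enhanced_prototypes(support, relations)
print(classify(rng.standard_normal(2 * d), prototypes))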
A key task in natural language generation is referring expression generation (REG). Most work on REG is aimed at producing distinguishing descriptions: descriptions that uniquely characterize a target object in a visual scene (e.g., "the red sofa"), and do not apply to any of the other objects in the scene (the distractors). The first step in generating such descriptions is attribute selection: choosing a number of attributes that uniquely characterize the target object. In the next step, realization, the selected attributes are expressed in natural language. Here we focus on the attribute selection step. We investigate to which extent attribute selection can be done in a language independent way; that is, we aim to find out if attribute selection algorithms trained on data from one language can be successfully applied to another language. The languages we investigate are English and Dutch.Many REG algorithms require training data, before they can successfully be applied to generate references in a particular domain. The Incremental Algorithm (Dale and Reiter, 1995) , for example, assumes that certain attributes are more preferred than others, and it is assumed that determining the preference order of attributes is an empirical matter that needs to be settled for each new domain. The graph-based algorithm (Krahmer et al., 2003) , to give a second example, similarly assumes that certain attributes are preferred (are "cheaper") than others, and that data are required to compute the attribute-cost functions.Traditional text corpora have been argued to be of restricted value for REG, since these typically are not "semantically transparent" (van Deemter et al., 2006) . Rather what seems to be needed is data collected from human participants, who produce referring expressions for specific targets in settings where all properties of the target and its distractors are known. Needless to say, collecting and annotating such data takes a lot of time and effort. So what to do if one wants to develop a REG algorithm for a new language? Would this require a new data collection, or could existing data collected for a different language be used? Clearly, linguistic realization is language dependent, but to what extent is attribute selection language dependent? This is the question addressed in this paper.Below we describe the English and Dutch corpora used in our experiments (Section 2), the graph-based algorithm we used for attribute selection (Section 3), and the corpus-based attribute costs and orders used by the algorithm (Section 4). We present the results of our cross-linguistic attribute selection experiments (Section 5) and end with a discussion and conclusions (Section 6). | 0 |
Lexical translation is the task of translating individual words or phrases, either on their own (e.g., search-engine queries or meta-data tags) or as part of a knowledge-based Machine Translation (MT) system. In contrast with statistical MT, lexical translation does not require aligned corpora as input. Because large aligned corpora are non-existent for many language pairs, and are very expensive to generate, lexical translation is possible for a much broader set of languages than statistical MT.While lexical translation has a long history (cf. (Helmreich et al., 1993; Copestake et al., 1994; Hull and Grefenstette, 1996) ), interest in it peaked in the 1990's. Yet, as this paper shows, the proliferation of Machine-Readable Dictionaries (MRDs) and the rapid growth of multi-lingual Wiktionaries offers the opportunity to scale lexical translation to an unprecedented number of languages. Moreover, the increasing international adoption of the Web yields opportunities for new applications of lexical translation systems.This paper presents a novel approach to lexical translation based on the translation graph. A node in the graph represents a word in a particular language, and an edge denotes a word sense shared between words in a pair of languages. Our TRANSGRAPH system automatically constructs a graph from a collection of independently-authored, machine-readable bilingual dictionaries and multi-lingual Wiktionaries as described in Section 2. Figure 1 shows an example translation graph.When all the edges along a path in the translation graph share the same word sense, then the path denotes a correct translation between its end points. When word senses come from distinct dictionaries, however, we are uncertain about whether the senses are the same or not. Thus, we define an inference procedure that computes the probability that two edges denote the same word sense and use this probability, coupled with the structure of the graph, to compute the probability that a path denotes a correct translation.Before we consider lexical translation in more detail, we need to ask: is lexical translation of any practical utility? While it does not solve the full machine-translation problem, lexical translation is valuable for a number of practical tasks including the translation of search queries, meta-tags, and individual words or phrases. For example, Google and other companies have fielded WordTranslator tools that allow the reader of a Web page to view the translation of particular word, which is helpful if you are, say, a Japanese speaker reading an English text and you come across an unfamiliar word.In the case of image search, the utility of lexical translation is even more readily apparent. Google retrieves images based on the words in their "vicinity", which limits the ability of a searcher to retrieve them. Although images are universal, an English searcher will fail to find images tagged in Chinese, and a Dutch searcher will fail to find images tagged in English. To address this problem, we built the PANIMAGES cross-lingual image search engine. 1 PANIM-AGES enables searchers to translate and disambiguate their queries before sending them to Google. 
PANIMAGES utilizes the translation graph; thus it also enables us to evaluate the quality of translations inferred from the graph in the context of a practical application.The key contributions of the paper are as follows:• We introduce the translation graph, a unique, automatically-constructed lexical resource, which currently consists of over 1.2 million words with over 2.3 million edges indicating possible translations.• We formalize the problem of lexical translation as probabilistic inference over the translation graph and quantify the gain of inference over merely looking up translations in the source dictionaries.• We identify a set of challenges in searching the Web for images, and introduce PANIMAGES, a crosslingual image search application that is deployed on the Web to address these challenges.• We report on experiments that show how PANIMAGES substantially increases image precision and recall for queries in "minor" languages, thereby demonstrating the utility of PANIMAGES and the translation graph.The remainder of the paper is organized as follows. Section 2 introduces the translation graph. Section 3 describes PANIMAGES, our cross-lingual image search application. Section 4 reports statistics on the translation graph and evaluates the utility of the graph by reporting on the precision and recall of the PANIMAGES application. Section 5 discusses related work, followed by conclusions and future work in Section 6. | 0 |
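The path-based inference can be illustrated with a small sketch: score a multi-dictionary path by the probability that its consecutive edges denote the same word sense. The toy path, dictionary names, and the fixed cross-dictionary prior below are illustrative assumptions, not the actual TRANSGRAPH inference procedure.

```python
# Sketch: score a path in a translation graph by the probability that
# consecutive edges denote the same word sense. The path, dictionaries, and
# probability values are invented for illustration.

# Each edge: (source word, target word, dictionary it was extracted from).
path = [(("cat", "en"), ("chat", "fr"), "dict_en_fr"),
        (("chat", "fr"), ("gato", "es"), "dict_fr_es")]

def sense_match_prob(dict_a, dict_b):
    # Placeholder assumption: edges from the same dictionary entry share a
    # sense with certainty; across independent dictionaries use a fixed prior.
    return 1.0 if dict_a == dict_b else 0.7

def path_prob(path):
    """Probability that the whole path is a correct translation."""
    prob = 1.0
    for (_, _, d1), (_, _, d2) in zip(path, path[1:]):
        prob *= sense_match_prob(d1, d2)
    return prob

print(path_prob(path))   # 0.7 under the assumptions above
```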
Existing research has shown the usefulness of public sentiment in social media across a wide range of applications. Several works showed social media as a promising tool for stock market prediction (Bollen et al., 2011; Ruiz et al., 2012; Si et al., 2013) . However, the semantic relationships between stocks have not yet been explored. In this paper, we show that the latent semantic relations among stocks and the associated social sentiment can yield a better prediction model. On Twitter, cash-tags (e.g., $aapl for Apple Inc.) are used in a tweet to indicate that the tweet talks about the stocks or some other related information about the companies. For example, one tweet containing cash-tags: $aapl and $goog (Google Inc.), is "$AAPL is loosing customers. everybody is buying android phones! $GOOG". Such joint mentions directly reflect some kind of latent relationship between the involved stocks, which motivates us to exploit such information for the stock prediction.We propose a notion of Semantic Stock Network (SSN) and use it to summarize the latent semantics of stocks from social discussions. To our knowledge, this is the first work that uses cash-tags in Twitter for mining stock semantic relations. Our stock network is constructed based on the co-occurrences of cash-tags in tweets. With the SSN, we employ a labeled topic model to jointly model both the tweets and the network structure to assign each node and each edge a topic respectively. Then, a lexicon-based sentiment analysis method is used to compute a sentiment score for each node and each edge topic. To predict each stock's performance (i.e., the up/down movement of the stock's closing price), we use the sentiment time-series over the SSN and the price time series in a vector autoregression (VAR) framework.We will show that the neighbor relationships in SSN give very useful insights into the dynamics of the stock market. Our experimental results demonstrate that topic sentiments from close neighbors of a stock can help improve the prediction of the stock market markedly. | 0 |
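As a concrete illustration of how such a network can be derived from cash-tags, the sketch below builds co-occurrence edges from a handful of invented tweets; raw co-mention counts are used as edge weights, which is one simple choice and not necessarily the exact weighting used for the SSN in the paper.

```python
# Sketch: build a Semantic Stock Network from cash-tag co-occurrences.
# The tweets are invented; an edge's weight is the number of tweets in which
# the two cash-tags are jointly mentioned.

from collections import Counter
from itertools import combinations
import re

tweets = [
    "$AAPL is losing customers. everybody is buying android phones! $GOOG",
    "$GOOG and $MSFT both up today",
    "$AAPL earnings call tonight",
]

edge_weights = Counter()
for tweet in tweets:
    tags = sorted(set(re.findall(r"\$[A-Za-z]+", tweet.upper())))
    for a, b in combinations(tags, 2):
        edge_weights[(a, b)] += 1

print(edge_weights)
# Counter({('$AAPL', '$GOOG'): 1, ('$GOOG', '$MSFT'): 1})
```

Node- and edge-level sentiment scores would then be attached to this graph before feeding the resulting time series into the VAR model.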
Around the 1980s the computational exploitation of machine-readable dictionaries (MRDs) for the automatic acquisition of lexical and semantic information enjoyed great favor in NLP (Calzolari et al., 1973; Chodorow et al., 1985). MRDs' definitions provided robust and structured knowledge from which semantic relations were automatically extracted for linguistic studies (Markowitz et al., 1986) and linguistic resource development (Calzolari, 1988). Today the scenario has changed, as corpora have become the main source for semantic knowledge acquisition. However, dictionaries are regaining some interest thanks to the availability of public-domain dictionaries, especially Wiktionary. In the present work, we describe a method to create a morphosemantic and morphological French lexicon from Wiktionary's definitions. This type of large-coverage resource is unavailable for most languages; one exception is the CELEX database (Baayen et al., 1995) for English, German and Dutch, a paid resource distributed by the LDC. The paper is organized as follows. Section 2 reports related work on semantic and morphological acquisition from MRDs. In Section 3, we describe how we converted Wiktionnaire, the French language edition of Wiktionary, into a structured XML-tagged MRD which contains, among other things, definitions and morphological relations. In Section 4, we explain how we used Wiktionnaire's morphological sections to create a lexicon of morphologically related words. The notion of morphological definitions and their automatic identification are introduced in Section 5. In Section 6, we show how these definitions enable us to acquire new derived words and enrich the initial lexicon. Finally, Section 7 describes an experiment where we semantically typed the definitions of process nouns. | 0
Evaluation in monolingual translation (Xu et al., 2015; Mani, 2009) and in particular in GEC (Tetreault and Chodorow, 2008; Madnani et al., 2011; Felice and Briscoe, 2015; Bryant and Ng, 2015) has gained notoriety for its difficulty, due in part to the heterogeneity and size of the space of valid corrections (Chodorow et al., 2012; Dreyer and Marcu, 2012). Reference-based evaluation measures (RBM) are the common practice in GEC, including the standard M² (Dahlmeier and Ng, 2012), GLEU and I-measure (Felice and Briscoe, 2015). The Low Coverage Bias (LCB) was previously discussed by Bryant and Ng (2015), who showed that inter-annotator agreement in producing references is low, and concluded that RBMs underestimate the performance of GEC systems. To address this, they proposed a new measure, Ratio Scoring, which re-scales M² by the inter-annotator agreement (i.e., the score of a human corrector), interpreted as an upper bound. We claim that the LCB has more far-reaching implications than previously discussed. First, while we agree with Bryant and Ng (2015) that a human correction should receive a perfect score, we show that LCB does not merely scale system performance by a constant factor, but rather that some correction policies are less prone to being biased against. Concretely, we show that by only correcting closed-class errors, where few possible corrections are valid, systems can outperform humans. Indeed, in Section 2.3 we show that some existing systems outperform humans on M² and GLEU, while applying only a few changes to the source. We thus argue that the development of GEC systems against low-coverage RBMs disincentivizes systems from making changes to the source in cases where there are plentiful valid corrections (open-class errors), as necessarily only some of them are covered by the reference set. To support our claim we show that (1) existing GEC systems under-correct, often performing an order of magnitude fewer corrections than a human does (§3.2); (2) increasing the number of references alleviates under-correction (§3.3); and (3) under-correction is more pronounced in error types that are more varied in their valid corrections (§3.4). A different approach for addressing LCB was taken by Bryant and Ng (2015), who propose to increase the number of references (henceforth, M). In Section 2 we estimate the distribution of corrections per sentence, and find that increasing M is unlikely to overcome LCB, due to the vast number of valid corrections for a sentence and their long-tailed distribution. Indeed, even short sentences have over 1000 valid corrections on average. Empirically assessing the effect of increasing M on the bias, we find diminishing returns using three standard GEC measures (M², accuracy and GLEU), underscoring the difficulty in this approach. Similar trends are found when conducting such experiments on Text Simplification (TS) (§4). Specifically, we show that (1) the distribution of valid simplifications for a given sentence is long-tailed; (2) common measures for TS dramatically under-estimate performance; and (3) additional references alleviate this under-prediction. To recap, we find that the LCB hinders the reliability of RBMs for GEC, and incentivizes systems developed to optimize these measures not to correct. LCB cannot be overcome by re-scaling or increasing M in any feasible range. | 0
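The incentive structure described above can be illustrated with a deliberately simplified score: represent edits as coarse tuples, score the system against the reference edit set that maximizes F0.5, and compare a cautious system with a more thorough one. This is only a sketch of the coverage effect, not the actual M² scorer, and the edits below are invented.

```python
# Simplified illustration of the low-coverage bias: each reference is one
# annotator's edit set, and the system is scored against the reference that
# maximizes precision-weighted F0.5.

def f05(system_edits, reference_edits):
    system, reference = set(system_edits), set(reference_edits)
    tp = len(system & reference)
    p = tp / len(system) if system else 1.0
    r = tp / len(reference) if reference else 1.0
    return 1.25 * p * r / (0.25 * p + r) if p + r else 0.0

def score(system_edits, references):
    return max(f05(system_edits, ref) for ref in references)

# One closed-class error (edit "A", a single valid fix) and one open-class
# error (edit "B", many valid fixes, only two of which are in the references).
references = [{("A", "the"), ("B", "went")},      # annotator 1
              {("A", "the"), ("B", "walked")}]    # annotator 2

cautious = {("A", "the")}                     # corrects only the closed-class error
thorough = {("A", "the"), ("B", "strolled")}  # also fixes the open-class error, validly

print(round(score(cautious, references), 3))   # 0.833 -- under-correction rewarded
print(round(score(thorough, references), 3))   # 0.5   -- valid but uncovered edit penalized
```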
Parallel corpora, that is, collections of documents that are mutual translations, are used in many natural language processing applications, particularly for statistical machine translation. Building such resources is, however, exceedingly expensive, requiring highly skilled annotators or professional translators (Preiss, 2012). Comparable corpora, which are sets of texts in two or more languages that are not translations of each other, are often considered a solution for the lack of parallel corpora, and many techniques have been proposed to extract parallel sentences (Munteanu et al., 2004; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010) or mine word translations (Fung, 1995; Rapp, 1999; Chiao and Zweigenbaum, 2002; Morin et al., 2007; Vulić and Moens, 2012). Identifying comparable resources in a large amount of multilingual data remains a very challenging task. The purpose of the Building and Using Comparable Corpora (BUCC) 2015 shared task (https://comparable.limsi.fr/bucc2015/) is to provide the first evaluation of existing approaches for identifying comparable resources. More precisely, given a large collection of Wikipedia pages in several languages, the task is to identify the most similar pages across languages. In this paper, we describe the system that we developed for the BUCC 2015 shared task and show that a language-agnostic approach can achieve promising results. | 0
Collaboration, a coordinated process involving two or more individuals participating in a task in an interdependent way, is an important topic of study given its status as a major 21st-century skill (Lai et al., 2017; Council, 2011; Rios et al., 2020). Though collaboration as a general term is viewed as a learnable competency, notable distinctions emerge when examining how collaboration surfaces within relevant research. One semantic distinction is that the term collaboration is not explicitly defined, or is used interchangeably with concepts such as group collaboration, teamwork, collective problem solving, cooperation, and more (OECD, 2015). These inconsistencies in meaning make it challenging to connect various research agendas that promote the advantages of collaboration. Another distinction to note is modality-related. Some research makes no modality distinctions when reporting results, though much of it has examined collaboration via online/computer-mediated interactions, both synchronous and asynchronous, while other research has examined co-located collaborative acts that happen face-to-face. Despite semantic, modality, and other distinctions, various fields have advanced what we know about collaboration, specifically collaboration as a language-mediated process. Scholars within the fields of NLP, cognitive science, and educational research have focused separately on verbal and written aspects of collaborative exchanges (speech, text-based outputs, and audio such as non-linguistic pauses) to better understand aspects of collaboration. Recent NLP research, for example, has explored neural models equipped with dynamic knowledge graph embeddings, the use of large language models to model real-world speech, and the development of collaboration datasets (Ekstedt and Skantze, 2020; He et al., 2017; Lee et al., 2022), while cognitive science has explored general modeling approaches for collaborative behavior and large language models as knowledge sources for intelligent agents (Goldstone and Gureckis, 2009; Huang et al., 2022; Wray et al., 2021). Learning analytics, a subset of educational research that extracts diverse datastreams from the learning process to improve learning, has developed automated multimodal approaches to detect, model and provide feedback about collaborative learning exchanges (Dowell et al., 2019; Pugh et al., 2022; Worsley and Ochoa, 2020). Though these studies differ in their disciplinary perspectives, they view language as essential to individuals' application of collaborative behavior and researchers' understanding of said behavior. | 0
Recent advances in neural machine translation (NMT) have achieved remarkable success over the state-of-the-art of statistical machine translation (SMT) on various language pairs (Bahdanau et al., 2015; Jean et al., 2015; Luong et al., 2015; Wu et al., 2016; Vaswani et al., 2017). In the neural networks of seq2seq models, whether RNN-based (Bahdanau et al., 2015), CNN-based (Gehring et al., 2017), or fully attention-based (Vaswani et al., 2017), there exist many scenarios in both the encoder and the decoder where a weighted sum model (WSM) takes a set of inputs and generates one output. As shown in Eq. 1, the WSM first combines k inputs (x_1, ..., x_k) with k respective weights (w_1, ..., w_k) and then applies a non-linear activation function f, such as tanh, the sigmoid function, or ReLU. In this paper we omit bias terms to make the equations less cluttered: o = f(w_1 x_1 + w_2 x_2 + ... + w_k x_k). (1) Note that the above weights (w_1, ..., w_k) are independent of each other and, once the model is tuned, the weights are fixed for all inputs, suggesting that by ignoring the different needs of the inputs, the WSM lacks effective control over the influence of each input. Let us take a concrete scenario in a seq2seq model as an example. Figure 1(a) shows a typical illustration of the generation of the t-th target word, where the decoder takes three inputs, i.e., the source context c_t, the previous target word y_{t-1} and the current target state s_t, while generating one output y_t via the output state o_t. However, previous work suggests that different target words require inconsistent contributions from the source context (c_t) and the target context (i.e., y_{t-1} and s_t). For example, to generate the translation Xinhua News Agency, Hong Kong, the first word y_1 (Xinhua) is highly related to its source context c_1, while the second and third words y_2 (News) and y_3 (Agency) are mainly influenced by the target context due to the well-formed expression Xinhua News Agency. Similarly, y_5 (Hong) and y_6 (Kong) are mainly influenced by the source context and the target context, respectively. In this paper, we propose adaptive weighting to dynamically control the contribution of each input in a WSM that has a set of inputs. Unlike the conventional weights, which are independent of each other, adaptive weights are learned to be dependent on each other and, more importantly, to dynamically select the amount of input information. Specifically, we use a gating mechanism to incorporate adaptive weights in the GRU and in computing the output states. Experimentation on both Chinese-to-English and English-to-German translation tasks demonstrates that NMT systems with adaptive weighting can substantially improve translation accuracy. Moreover, through adaptive weights we discuss in depth what type of information is encoded in the encoder and how this information influences the generation of a target word. | 0
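A minimal PyTorch sketch of the contrast between a fixed weighted sum as in Eq. (1) and an input-dependent (adaptively weighted) variant is given below; the dimensions and the sigmoid gating form are illustrative choices and not necessarily the exact parameterization used in the paper.

```python
# Minimal PyTorch sketch: a fixed weighted sum versus an adaptively weighted
# one in which gates computed from the current inputs decide how much of each
# input to use.

import torch
import torch.nn as nn

class FixedWSM(nn.Module):
    """o = tanh(W1 x1 + W2 x2): the weights are the same for every input."""
    def __init__(self, d):
        super().__init__()
        self.w1 = nn.Linear(d, d, bias=False)
        self.w2 = nn.Linear(d, d, bias=False)

    def forward(self, x1, x2):
        return torch.tanh(self.w1(x1) + self.w2(x2))

class AdaptiveWSM(nn.Module):
    """Input-dependent gates scale each transformed input before the sum."""
    def __init__(self, d):
        super().__init__()
        self.w1 = nn.Linear(d, d, bias=False)
        self.w2 = nn.Linear(d, d, bias=False)
        self.gate = nn.Linear(2 * d, 2 * d)

    def forward(self, x1, x2):
        g = torch.sigmoid(self.gate(torch.cat([x1, x2], dim=-1)))
        g1, g2 = g.chunk(2, dim=-1)
        return torch.tanh(g1 * self.w1(x1) + g2 * self.w2(x2))

d = 8
x_src, x_tgt = torch.randn(1, d), torch.randn(1, d)   # e.g. source context, target state
print(FixedWSM(d)(x_src, x_tgt).shape)                # torch.Size([1, 8])
print(AdaptiveWSM(d)(x_src, x_tgt).shape)             # torch.Size([1, 8])
```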
Many inference algorithms require models to make strong assumptions of conditional independence between variables. For example, the Viterbi algorithm used for decoding in conditional random fields requires the model to be Markovian. Strong assumptions are also made in the case of McDonald et al.'s (2005b) non-projective dependency parsing model, where attachment decisions are made independently of one another. However, often such assumptions cannot be justified. For example, in dependency parsing, if a subject has already been identified for a given verb, then the probability of attaching a second subject to the verb is zero. Similarly, if we find that one coordination argument is a noun, then the other argument cannot be a verb. Thus decisions are often co-dependent. Integer Linear Programming (ILP) has recently been applied to inference in sequential conditional random fields (Roth and Yih, 2004), which has allowed the use of truly global constraints during inference. However, it is not possible to use this approach directly for a complex task like non-projective dependency parsing due to the exponential number of constraints required to prevent cycles from occurring in the dependency graph. To model all these constraints explicitly would result in an ILP formulation too large to solve efficiently (Williams, 2002). A similar problem also occurs in an ILP formulation for machine translation which treats decoding as the Travelling Salesman Problem (Germann et al., 2001). In this paper we present a method which extends the applicability of ILP to a more complex set of problems. Instead of adding all the constraints we wish to capture to the formulation, we first solve the program with a fraction of the constraints. The solution is then examined and, if required, additional constraints are added. This procedure is repeated until all constraints are satisfied. We apply this dependency parsing approach to Dutch due to the language's non-projective nature, and take the parser of McDonald et al. (2005b) as a starting point for our model. In the following section we introduce dependency parsing and review previous work. In Section 3 we present our model and formulate it as an ILP problem with a set of linguistically motivated constraints. We include details of an incremental algorithm used to solve this formulation. Our experimental set-up is provided in Section 4 and is followed by results in Section 5 along with runtime experiments. We finally discuss future research and potential improvements to our approach. (Figure 1: A Dutch dependency tree for "I'll come at twelve and then you'll get what you deserve".) | 0
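The solve-then-check loop behind such incremental constraint addition can be sketched as follows; `solve_ilp` is a hypothetical placeholder for a call into an actual ILP solver with the current constraint set, since the full formulation is beyond a few lines.

```python
# Sketch of incremental constraint addition for non-projective dependency
# parsing as an ILP: solve without cycle constraints, look for cycles in the
# returned arcs, add constraints forbidding them, and re-solve.
# `solve_ilp(sentence, constraints)` is a hypothetical function returning a
# {dependent: head} arc assignment for the current constraint set.

def find_cycle(heads):
    """Return a set of tokens forming a cycle in {dependent: head}, or None."""
    for start in heads:
        seen, node = [], start
        while node in heads and node not in seen:
            seen.append(node)
            node = heads[node]
        if node in seen:                      # walked back onto the path: a cycle
            return set(seen[seen.index(node):])
    return None

def parse(sentence, solve_ilp):
    constraints = []          # linguistically motivated constraints would start here
    while True:
        heads = solve_ilp(sentence, constraints)
        cycle = find_cycle(heads)
        if cycle is None:
            return heads
        # Forbid selecting all arcs of this cycle simultaneously in the next solve.
        constraints.append(("no_cycle", frozenset(cycle)))
```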
Detecting abusive language is important for two substantive reasons. First is the mitigation of harm to individuals. Exposure to hate speech can result in a wide range of psychological effects, including degradation of mental health, depression, reduced self-esteem, and greater stress expression (Saha et al., 2019; Tynes et al., 2008; Boeckmann and Liew, 2002). Second is the broader impact of unregulated speech on the participation gap in social media (Jenkins, 2009; Notley, 2009). Overexposure to hateful language results in user desensitization (Soral et al., 2018) and radicalization (Norman and Mikhael, 2017), both of which have been shown to worsen racial relations (Sène, 2019). Moreover, hateful echo-chambers promote a "spiral of silence" that discourages counter-speech in conversations online (Duncan et al., 2020). Access to large-scale training data is the first step towards robust automated systems for abusive language detection. While industry researchers can access moderator logs and user reports, proprietary data is not the standard for academics. Instead, pejorative keywords are commonly used as filters in the data collection process. These include, but are not limited to, slurs and other curated lists of profane language (Waseem and Hovy, 2016; Waseem, 2016; Khodak et al., 2018; Rezvan et al., 2018), terms borrowed from Hatebase, a multilingual repository for hate speech (Silva et al., 2016; Davidson et al., 2017; Founta et al., 2018; ElSherief et al., 2018), offensive hashtags (Chatzakou et al., 2017; Golbeck et al., 2017), and manually selected threads or subreddits (Gao and Huang, 2017; Hammer et al., 2019; Qian et al., 2019). Although the drawbacks of keyword-based approaches are known to researchers, there are currently no clear alternatives to this technique (Waseem and Hovy, 2016; Davidson et al., 2017; ElSherief et al., 2018). There has been a recent focus on how technical choices involving data curation can introduce systemic bias in the resultant corpus. For instance, Wiegand et al. (2019) discover that terms like football, announcer, and sport have the strongest correlation to abusive posts in Waseem and Hovy (2016). Furthermore, Davidson et al. (2019), Xia et al. (2020) and Sap et al. (2019) reveal how classifiers trained on data with systemic racial bias have a higher tendency to label text written in African-American English as abusive. Cited examples include: "Wussup, nigga!", and "I saw his ass yesterday". Left unaddressed, bias has a real impact on users. Automated recruiting tools used by Amazon.com were shown to discriminate against women (Cook, 2018). Similarly, Microsoft released a public chatbot that learned to share racist content on Twitter (Vincent, 2016). A common solution is to debias language representations (Bolukbasi et al., 2016). However, these methods conceal but do not remove systemic bias in the overall data (Gonen and Goldberg, 2019). A way of beginning to address the issue of racial and gender bias is therefore to understand the implications of forced sampling. Our paper focuses specifically on data that is collected using derogatory keywords, and we make two main contributions to this end. First, we provide an annotation guide that outlines 4 main categories of online slur usage, which we further divide into a total of 12 subcategories. Second, we present a publicly available corpus based on our taxonomy, with 39.8k human-annotated comments extracted from Reddit.
We also propose an approach to data collection and annotation that prioritizes inclusivity both by design and by application. Inclusivity by Design: Data selection and annotation achieve weighted group representation. We sample from a variety of subreddits in order to capture non-derogatory slur usage. We then hire a diverse set of coders under strict ethical standards as a means of engaging the perspectives of various target communities. We encourage opinion diversity by pairing annotators into teams based on maximum demographic differences. Inclusivity by Application: Our coding guidelines are extensible to language that targets multiple protected groups. We collect data using the slurs: faggot, a pejorative term used primarily to refer to gay men; nigger, an ethnic slur typically directed at black people, especially African Americans; and tranny, a derogatory slur for a transgender person. This is the only time we mention the actual slurs. From here on, we refer to each term as the f-slur, n-slur, and t-slur, respectively. We specifically choose these slurs because they enable us to study discrimination across sexuality, ethnicity, and gender. | 0
Most statistical machine translation (SMT) systems (e.g. phrase-based, n-gram-based) extract their translation models from word alignment trained in a previous stage. Many papers have shown that alignment quality is poorly correlated with MT quality (for example Vilar et al. (2006) ). Then, we can tune the alignment directly according to MT metrics (Lambert et al., 2007) . In this paper we rather try to find out which alignment characteristics help or worsen translation.In the related papers (see next section) some alignment characteristics are usually considered, and the impact on MT of alignments with different values for these characteristics is evaluated. The contributions of this paper are twofold. Firstly, the problem is considered from the inverse point of view: we start from an initial alignment and tune it directly according to a translation quality metric (BLEU score (Papineni et al., 2002) ) and according to an alignment quality metric (F-score, see Section 4.3) . In this way, we can investigate for any alignment characteristic how it is affected by the change of tuning criterion. If there exist alignment characteristics which are helpful in translation, they should not depend on the aligner used. However, they could depend on the MT system, the language pair, or the corpus size or type. The second contribution of this paper is to study more systematically how the considered characteristics depend on these parameters. We report results for two different SMT systems: a phrase-based system (Moses (Koehn et al., 2007) ) and an n-gram-based system (Crego and Mariño, 2007) . We performed this comparison on two different tasks: translation from Chinese to English, trained with IWSLT data (BTEC corpus, a small corpus in the travelling domain), and translation from Spanish to English, trained on a 2.7 million word corpus of the European Parliament proceedings.First we discuss related work. In Section 3, we describe the alignment optimisation procedures according to F-score and BLEU, and give more details on the alignment system used. Then in Section 4, we provide a summary of the experiments performed on each task, together with a description of the data used. In Section 5, the results are discussed. Finally, some conclusions are provided together with avenues for further research. | 0 |
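To make the F-score tuning criterion concrete, here is a small sketch that scores a predicted word alignment against a gold alignment, treating alignments as sets of (source index, target index) pairs; the alpha-weighted F formula is a common choice for this task, and the distinction between sure and possible links, as well as the toy alignments themselves, are simplifications made for illustration.

```python
# Sketch: alignment precision, recall and alpha-weighted F-score over sets of
# aligned (source_index, target_index) pairs.

def alignment_scores(predicted, gold, alpha=0.5):
    predicted, gold = set(predicted), set(gold)
    correct = predicted & gold
    precision = len(correct) / len(predicted) if predicted else 0.0
    recall = len(correct) / len(gold) if gold else 0.0
    if precision == 0.0 and recall == 0.0:
        return precision, recall, 0.0
    f = 1.0 / (alpha / precision + (1 - alpha) / recall)
    return precision, recall, f

gold = {(0, 0), (1, 2), (2, 1)}          # toy gold alignment
predicted = {(0, 0), (1, 2), (2, 2)}     # one wrong link
print(alignment_scores(predicted, gold))  # ~ (0.667, 0.667, 0.667)
```

Tuning against BLEU instead simply replaces this scoring function with an end-to-end translation evaluation, which is what allows the two criteria to pull the alignment characteristics in different directions.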
Under-resourced languages still present a great challenge for the speech processing community. Slovenian belongs to this group of under-resourced languages and is, with 2 million speakers, one of the smallest official EU languages. The development of Slovenian speech technology systems started at the end of the 1980s. Special emphasis was given to the development of spoken language resources. Over the years, categories of language resources comparable to the main ones for English were built: SNABI and GOPOLIS (TIMIT, Resource Management like), 1000 FDB SpeechDat (II) and PoliDat (SpeechDat like), and BNSI Broadcast News and SiBN Broadcast News. The common characteristic of these language resources is that they were manually annotated and transcribed. This is a time-consuming and expensive task, especially for under-resourced languages like Slovenian. The first Slovenian language resource that differs from this approach was SloParl with parliamentary debates, which was based on imperfect transcriptions generated in parliament. The size of these Slovenian language resources is approximately 200 hours of speech, which is significantly less than for frequently spoken world languages. An additional drawback for developing speech recognition systems is the fact that the Slovenian language belongs to the group of highly inflected languages. This group of languages usually needs significantly more spoken training material to successfully develop high-quality large-vocabulary continuous speech recognition systems. In order to increase the amount of available language resources for Slovenian automatic speech recognition and other speech technology systems, the decision was made to build a new Slovenian resource based on TEDx Talks (Technology, Entertainment, Design) using automatic acoustic classification and large-vocabulary continuous speech recognition. The costs and time usually needed to build such a speech resource were reduced by annotating and transcribing the acquired speech mainly in an unsupervised way. The paper is organized as follows. Section 2 presents the acquisition process. The automatic segmentation and transcription are described in Section 3. The manual annotation and transcription of the development and evaluation sets are presented in Section 4. The finalized speech database is described in Section 5, while the conclusion is given in Section 6. | 0
The morphological processing of languages is indispensable for most applications in Human Language Technology. Usually, morphological models and their implementations are the primary building blocks in NLP systems. The development of the computational morphology of a given language has two main stages. The first stage is the building of the morphological database itself. The second stage includes applications of the morphological database in different processing tasks. The interaction and mutual prediction between the two stages determine the linguistic and computational decision-making of each stage. Bulgarian computational morphology has developed as the result of local (Paskaleva et al. 1993; Popov et al. 1998) and international activities for the compilation of sets of morphosyntactic distinctions and the construction of electronic lexicons (Dimitrova et al. 1998). The need for synchronization and standardization has led to activities applying internationally acknowledged guidelines for morphosyntactic annotation to Bulgarian (Slavcheva and Paskaleva 1997), and to the comparison of morphosyntactic tagsets (Slavcheva 1997). In this paper I demonstrate the production scenario of modelling morphological knowledge in Bulgarian and applications of the created data sets in an integrated framework for the production and manipulation of language resources, that is, the BulTreeBank framework. The production scenario is exemplified by the Bulgarian verb as the morphologically richest and most problematic part-of-speech category. The definition of the set of morphosyntactic specifications for verbs in the lexicon is described. The application of the tagset in the automatic morphological analysis of text corpora is accounted for. Special attention is drawn to the attachment of short pronominal elements to verbs. This is a phenomenon that is difficult to handle in language processing due to its intermediate position between morphology and syntax proper. The paper is structured as follows. In Section 2 the principles of building the latest version of a Bulgarian tagset are pointed out and the subset of the tagset for verbs is exhaustively presented. Section 3 is dedicated to a specially worked-out typology of Bulgarian verbs which is suitable for handling the problematic verb forms. | 0
Initial development of rule-based parsers is often guided by the grammar writer's knowledge of the language and test suites that cover the "core" linguistic phenomena of the language (Nerbonne et al., 1988; Cooper et al., 1996; Lehmann et al., 1996). Once the basic grammar is implemented, including an appropriate lexicon, the direction of grammar development becomes less clear. Integration of a grammar in a particular application and the use of a particular corpus can guide grammar development: the corpus and application will require the implementation of specific constructions and lexical items, as well as the reevaluation of existing analyses. To streamline this sort of output-driven development, tools to examine parser output over large corpora are necessary, and as corpus size increases, the efficiency and scalability of those tools become crucial concerns. Some immediately relevant questions for the grammar writer include: • What constructions and lexical items need to be added for the application and corpus in question? • For any potential new construction or lexical item, is it worth adding, or would it be better to fall back to robust techniques? • For existing analyses, are they applying correctly, or do they need to be restricted, or even removed? In the remainder of this section, we briefly discuss some existing techniques for guiding large-scale grammar development and then introduce the grammar being developed and the tool we use in examining the grammar's output. The remainder of the paper discusses development of lexical resources and grammar rules, how overall progress is tracked, and how analysis of the grammar output can help development in other natural language components. There are several techniques currently being used by grammar engineers to guide large-scale grammar development, including error mining to detect gaps in grammar coverage, querying tools for gold-standard treebanks to determine the frequency of linguistic phenomena, and tools for querying parser output to determine how linguistic phenomena were analyzed in practice. An error mining technique presented by van Noord (2004) (henceforth: the van Noord Tool) can reveal gaps in grammar coverage by comparing the frequency of arbitrary n-grams of words in unsuccessfully parsed sentences with the same n-grams in unproblematic sentences, for large unannotated corpora. A parser can be run over new text, and a comparison of the in-domain and out-of-domain sentences can determine, for instance, that the grammar cannot parse adjective-noun hyphenation correctly (e.g. an electrical-switch cover). A different technique for error mining that uses discriminative treebanking is described in (Baldwin et al., 2005). This technique aims at determining issues with lexical coverage, grammatical (rule) coverage, ungrammaticality within the corpus (e.g. misspelled words), and extragrammaticality within the corpus (e.g. bulleted lists). A second approach involves querying gold-standard treebanks such as the Penn Treebank (Marcus et al., 1994) and the Tiger Treebank (Brants et al., 2004) to determine the frequency of certain phenomena. For example, Tiger Search (Lezius, 2002) can be used to list and frequency-sort stacked prepositions (e.g. up to the door) or temporal noun/adverbs after prepositions (e.g. by now).
The search tools over these treebanks allow for complex searches involving specification of lexical items, parts of speech, and tree configurations (see (Mírovský, 2008) for discussion of query requirements for searching tree and dependency banks). The third approach we discuss here differs from querying gold-standard treebanks in that corpora of actual parser output are queried to examine how constructions are analyzed by the grammar. For example, Bouma and Kloosterman (2002) use XQuery (an XML query language) to mine parse results stored as XML data. It is this sort of examination of parser output that is the focus of the present paper, and specific examples of our experiences follow in Section 2.2. Use of such tools has proven vital to the development of large-scale grammars. Based on our experiences with them, we began extensively using a tool called Oceanography (Waterman, 2009) to search parser output for very large (approximately 125 million sentence) parse runs stored on a distributed file system. Oceanography queries the parser output and returns counts of specific constructions or properties, as well as the example sentences they were extracted from. In the subsequent sections we discuss how this tool (in conjunction with existing ones like the van Noord Tool and Tiger Search) has enhanced grammar development for an English-language Lexical-Functional Grammar used for a semantic search application over Wikipedia. The grammar being developed is a Lexical-Functional Grammar (LFG; Dalrymple, 2001) that is part of the ParGram parallel grammar project (Butt et al., 1999; Butt et al., 2002). It runs on the XLE system (Crouch et al., 2009) and produces c(onstituent)-structures, which are trees, and f(unctional)-structures, which are attribute-value matrices recording grammatical functions and other syntactic features such as tense and number, as well as debugging features such as the source of lexical items (e.g. from a named entity finder, the morphology, or the guesser). There is a base grammar which covers the constructions found in standard written English, as well as three overlay grammars: one for parsing Wikipedia sentences, one for parsing Wikipedia headers, and one for parsing queries (sentential, phrasal, and keyword).
4 The files containing these structures are distributed over several machines since ∼125 million sentences are parsed for the analysis of Wikipedia.For any given syntactic or semantic structure, the XLE ordered rewrite system (XFR; (Crouch et al., 2009) ) can be used to extract information that is of interest to the grammar engineer, by way of "rules" or statements in the XFR language. As the XFR ordered rewrite system is also used for the semantics rules that turn f-structures into semantic representations, the notation is familiar to the grammar writers and is already designed for manipulating the syntactic f-structures.However, the mechanics of accessing each file on each machine and then assembling the results is prohibitively complicated without a tool that provides a simple interface to the system. Oceanography was designed to take a single specification file stating:• which data to examine (which corpus version; full Wikipedia build or fixed 10,000 document set); • the XFR rules to be applied;• what extracted data to count and report back.Many concrete examples of Oceanography runs will be discussed below. The basic idea is to use the XFR rules to specify searches over lexical items, features, and constructions in a way that is similar to that of Tiger Search and other facilities. The Oceanography machinery enables these searches over massive data and helps in compiling the results for the grammar engineer to inspect. We believe that similar approaches would be feasible to implement in other grammar development environments and, in fact, for some grammar outputs and applications, existing tools such as Tiger Search would be sufficient. By providing examples where such searches have aided our grammar development, we hope to encourage other grammar engineers to similarly extend their efforts to use easy access to massive data to drive their work. | 0 |
Spoken language translation of the Arabic language has recently been widely studied in different projects (DARPA TRANSTAC, GALE) and evaluation campaigns (IWSLT, see http://mastarpj.nict.go.jp/IWSLT2009/; NIST, see http://www.itl.nist.gov/iad/mig//tests/mt/). Most of the time, the rich morphology of the Arabic language is seen as a problem that must be addressed, especially when dealing with sparse data. It has been shown that pre-processing Arabic data using a morphological segmenter is useful to improve machine translation results [1] [2] or automatic speech recognition performance [3]. If such a strategy is applied, the choice of the Arabic segmenter is very important, since the Arabic segmentation heavily influences the translation quality: segmentation affects the translation models (alignments, phrase table) as well as the translation input. In a recent work [4] we conducted an in-depth study of the influence of two Arabic segmenters on the translation quality of a phrase-based system using the moses decoder (http://www.statmt.org/moses/). Examples of Arabic segmentations and associated translations are given in Table 1, where correct segmentations and correct translations (both evaluated by a human expert) are in bold. While the correct segmentation may lead to the correct translation (cases 1, 2 and 7), we also observed some sentences for which none of the proposed segmentations is correct (cases 3 and 4). In those cases, the translation output might still be correct. One reason may be that an incorrect segmentation can remain consistent with the segmentation applied on the training data (bad segmentation of the training data will probably lead to bad alignments, but these errors may be somehow recovered during the phrase-table construction). Finally, we also observe cases (5 and 6) where a correct segmentation does not necessarily lead to the best translation output. (Table 1 caption: Qualitative comparison of two Arabic segmentation methods (Buckwalter versus ASVM) for SMT; correct segmentations and translations, according to human expertise, are bold-faced.) Based on this analysis, we believe that simultaneously using multiple segmentations is a promising approach to improve machine translation of Arabic; this is the goal of the work described in this paper. A basic approach to implement this proposal would have been to build different MT systems using different segmentations of the Arabic training data and to combine their translation outputs. However, we think that it might be more interesting to leave the ambiguity of the Arabic segmentation at the input of the system (using a graph representation, for instance). Then, the best segmentation should be chosen during the decoding step. We will describe this latter approach and discuss: - the mathematics of this multiple segmentation approach (section 2), - a practical implementation in the case of verbatim text translation, where confusion networks (CN) are used to represent the ambiguity of the Arabic segmentation at the input of the MT system (section 3), - problems and solutions for applying the multiple segmentation approach to spoken language translation using ASR lattices (section 4), - experiments to validate the approach in the framework of the IWSLT evaluation campaigns (section 5). The last part of this paper (section 6) explains in detail the different systems submitted by LIG at IWSLT09 and the results obtained. | 0
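The way several segmentations can be left ambiguous at the input can be illustrated with a small sketch that packs per-token alternatives into a confusion-network-like structure; the tokens, segmenter names and weights below are invented, and real CN construction over full sentences also needs a proper alignment step, which is glossed over here.

```python
# Sketch: pack alternative segmentations of each source token into a
# confusion-network-like structure (one slot per original token, with
# normalized weights over the alternatives in that slot).

def build_confusion_network(token_segmentations, weights):
    """token_segmentations: list of dicts {segmenter_name: segmented form}."""
    network = []
    for alternatives in token_segmentations:
        slot = {}
        for name, segmented in alternatives.items():
            slot[segmented] = slot.get(segmented, 0.0) + weights[name]
        total = sum(slot.values())
        network.append({seg: w / total for seg, w in slot.items()})
    return network

# Two hypothetical segmenters that disagree on the second token only.
sentence = [
    {"buckwalter": "w+ qAl", "asvm": "w+ qAl"},
    {"buckwalter": "Al+ ktAb", "asvm": "AlktAb"},
]
print(build_confusion_network(sentence, {"buckwalter": 0.5, "asvm": 0.5}))
# [{'w+ qAl': 1.0}, {'Al+ ktAb': 0.5, 'AlktAb': 0.5}]
```

The decoder can then pick whichever alternative in each slot leads to the best-scoring translation, rather than committing to one segmenter up front.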
A key problem in sentiment analysis is to determine the polarity of sentiment in text. Much of the work on this problem has considered binary sentiment polarity (positive or negative) at granularity levels ranging from sentences (Mao and Lebanon, 2006; McDonald et al., 2007) to documents (Wilson et al., 2005; Allison, 2008). Multi-way polarity classification, i.e., the problem of inferring the "star" rating associated with a review, has been attempted in several domains, e.g., restaurant reviews (Snyder and Barzilay, 2007) and movie reviews (Bickerstaffe and Zukerman, 2010; Pang and Lee, 2005). Star ratings are more informative than positive/negative ratings, and are commonly given in reviews of films, restaurants, books and consumer goods. However, because of this finer grain, multi-way sentiment classification is a more difficult task than binary classification. Hence, the results for multi-way classification are typically inferior to those obtained for the binary case. Most of the research on sentiment analysis uses supervised classification methods such as Maximum Entropy (Berger et al., 1996), Support Vector Machines (SVMs) (Cortes and Vapnik, 1995) or Naïve Bayes (NB) (Domingos and Pazzani, 1997). The sentiment expressed in word patterns has been exploited by considering word n-grams (Hu et al., 2007), applying feature selection to handle the resultant proliferation of features (Mukras et al., 2007). In addition, when performing multi-way classification, approaches that consider class-label similarities (Bickerstaffe and Zukerman, 2010; Pang and Lee, 2005) generally outperform those that do not. Lexicon-based methods for sentiment analysis have been investigated in (Beineke et al., 2004; Taboada et al., 2011; Andreevskaia and Bergler, 2008; Melville et al., 2009) in the context of binary, rather than multi-way, sentiment classifiers. These methods often require intensive labour (e.g., via the Mechanical Turk service) to build up the lexicon (Taboada et al., 2011) or use a small, generic lexicon enhanced by sources from the Internet (Beineke et al., 2004). Andreevskaia and Bergler (2008) and Melville et al. (2009) employ a weighted average to combine information from the lexicon with the classification produced by a supervised machine learning method. Their results demonstrate the effectiveness of these methods only on small datasets, where the contribution of the machine-learning component is limited. This paper examines the performance of a hybrid lexicon/supervised-learning approach and two supervised machine learning methods in multi-way sentiment analysis. The hybrid approach combines information obtained from the lexicon with information obtained from an NB classifier with feature selection. Information is obtained from a lexicon by means of a novel function based on the Beta distribution. This function, which employs heuristics to account for negations, adverbial modifiers and sentence connectives, combines the sentiment of words into the sentiment of phrases, sentences, and eventually an entire review (Section 2).
The supervised learning methods are: an NB classifier with feature selection, and MCST (Bickerstaffe and Zukerman, 2010) -a state-of-the-art classifier based on hierarchical SVMs which considers label similarity (MCST outperforms Pang and Lee's (2005) best-performing methods on the Movies dataset described in Section 5.1).We also investigate the influence of three factors on sentiment-classification performance:(1) presence of sentiment-ambiguous sentences, which we identify by means of a heuristic (Section 4); (2) probability of the most probable star rating; and (3) coverage of the lexicon and the NB classifier, i.e., fraction of words in a review being "understood".Our results show that (1) the hybrid approach generally performs at least as well as NB with feature selection and MCST; (2) NB with feature selection generally outperforms MCST, highlighting the importance of choosing stringent baselines in algorithm evaluation; (3) the performance of sentiment analysis algorithms deteriorates as the number of sentiment-ambiguous sentences in a review increases, and improves as the probability of the most probable star rating of a review increases (beyond 50%), and as the coverage of the lexicon and the NB classifier increases (between 50% and 80%).In the next section, we present our lexiconbased approach. Section 3 describes the combination of the lexicon with an NB classifier, followed by our heuristic for identifying sentiment-ambiguous sentences. Section 5 presents the results of our evaluation, and Section 6 offers concluding remarks. | 0 |
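The supervised side of such a hybrid can be sketched with a Naïve Bayes classifier plus chi-square feature selection, whose class probabilities are then fused with lexicon-derived probabilities by a weighted average; the toy reviews, the value of k, and the fusion weight below are illustrative only, and the lexicon model itself (the Beta-distribution-based function) is not reproduced here.

```python
# Sketch: Naive Bayes with feature selection, fused with a lexicon's class
# probabilities via a simple weighted average. Data and weights are invented.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = ["terrible plot and awful acting", "dull but watchable",
           "decent film overall", "great acting, great story",
           "an absolute masterpiece"]
stars = [1, 2, 3, 4, 5]

nb = make_pipeline(CountVectorizer(), SelectKBest(chi2, k=5), MultinomialNB())
nb.fit(reviews, stars)

nb_probs = nb.predict_proba(["great story but awful acting"])[0]
lexicon_probs = np.array([0.10, 0.15, 0.40, 0.25, 0.10])  # stand-in lexicon output

lam = 0.6                                   # weight on the supervised model
combined = lam * nb_probs + (1 - lam) * lexicon_probs
classes = nb.named_steps["multinomialnb"].classes_
print(classes[combined.argmax()])           # predicted star rating
```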
Dysarthria is a set of congenital and traumatic neuro-motor disorders that impair the physical production of speech and affect approximately 0.8% of individuals in North America (Hosom et al., 2003). Causes of dysarthria include cerebral palsy (CP), multiple sclerosis, Parkinson's disease, and amyotrophic lateral sclerosis (ALS). These impairments reduce or remove normal control of the primary vocal articulators but do not affect the abstract production of meaningful, syntactically correct language. The neurological origins of dysarthria involve damage to the cranial nerves that control the speech articulators (Moore and Dalley, 2005). Spastic dysarthria, for instance, is partially caused by lesions in the facial and hypoglossal nerves, which control the jaw and tongue respectively (Duffy, 2005), resulting in slurred speech and a less differentiable vowel space (Kent and Rosen, 2004). Similarly, damage to the glossopharyngeal nerve can reduce control over vocal fold vibration (i.e., phonation), resulting in guttural or grating raspiness. Inadequate control of the soft palate caused by disruption of the vagus nerve may lead to a disproportionate amount of air released through the nose during speech (i.e., hypernasality). Unfortunately, traditional automatic speech recognition (ASR) is incompatible with dysarthric speech, often rendering such software inaccessible to those whose neuro-motor disabilities might make other forms of interaction (e.g., keyboards, touch screens) laborious. Traditional ASR representations, such as hidden Markov models (HMMs) trained for speaker independence, that achieve 84.8% word-level accuracy for non-dysarthric speakers might achieve less than 4.5% accuracy on short sentences of severely dysarthric speech (Rudzicz, 2007). Our research group is currently developing new ASR models that incorporate empirical knowledge of dysarthric articulation for use in assistive applications (Rudzicz, 2009). Although these models have increased accuracy, the disparity is still high. Our aim is to understand why ASR fails for dysarthric speakers by understanding the acoustic and articulatory nature of their speech. In this paper, we cast the speech-motor interface within the mathematical framework of the noisy-channel model. This is motivated by the characterization of dysarthria as a distortion of parallel biological pathways that corrupt motor signals before execution (Kent and Rosen, 2004; Freund et al., 2005), as in the examples cited above. Within this information-theoretic framework, we aim to infer the nature of the motor signal distortions given appropriate measurements of the vocal tract. That is, we ask the following question: Is dysarthric speech a distortion of typical speech, or are they both distortions of some common underlying representation? | 0
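One compact way to write down the noisy-channel view sketched above is the following; the notation (x for the intended motor command, y for the measured articulatory/acoustic signal) is chosen for this sketch rather than taken from the paper.

```latex
% Noisy-channel view of the speech-motor interface: the intended motor
% command x passes through a distortion channel before being realized as the
% measured articulatory/acoustic signal y.
\begin{equation}
  \hat{x} \;=\; \arg\max_{x} P(x \mid y) \;=\; \arg\max_{x} P(y \mid x)\, P(x)
\end{equation}
% Characterizing dysarthria then amounts to estimating the channel P(y | x)
% from articulatory measurements and comparing it with the channel estimated
% for typical speech.
```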
The preposition into describes the path of a motion event, which typically involves an object, or figure, moving along the path to enter a reference object, or ground (Talmy, 2000). An example of a motion event is the caused-motion construction involving a verb (V) and two noun phrases (NP1 and NP2) as a direct and an indirect object, respectively. Sentence (1), extracted from the Penn Treebank Wall Street Journal (WSJ) Corpus (Charniak et al., 2000), illustrates such a V NP1 into NP2 construction (shown in bold with lexical categories glossed underneath). The basic semantics of the construction involves a motion event that requires the direct object (NP1) to be moved and directed to the confinement of the indirect object (NP2). In this case, an unspecified number of airplanes undergo movement towards a deictic space. (1) To shove even more airplanes into this space is asking for trouble, V NP1 Prep NP2 experts say. (WSJ-V1141) However, this type of prepositional phrase poses an ambiguity problem in parsing. Sentence (1) serves as an example of one parse, in which the preposition closely associates with the verb but not NP1. The second parsing possibility is one where the PP must be interpreted with NP1, as illustrated in bold in sentence (2). (2) And he soon became aware that the government was able to show a flow of millions of dollars in illicit funds into his account. V NP1 Prep NP2 Here the head of NP1 (a flow) is to be interpreted along with into and NP2 (his account), rather than with the preceding verb (to show). Computational linguists have found that these two structures cause parsing problems in natural language processing (NLP) and have referred to this problem of determining the site to which the PP attaches as the PP attachment problem (e.g., Hindle and Rooth, 1993; Volk, 2006). As illustrated in (1) and (2), this problem is conventionally formalized as a binary choice (Merlo and Ferrer, 2005), either verb-attached for (1) or noun-attached for (2). In minimalist syntax, ternary structures like (3a) are to be transformed by deriving an explicit causative construction (3b) (Radford, 2004). The operation involves raising the verb roll to join the causative verb made, so as to adhere to a binary operation. (3a) He rolled the ball down the hill. V NP1 Prep NP2 (3b) He made + roll the ball (roll) down the hill. V-causative + V NP1 trace Prep NP2 (Radford, 2004, p. 337, with gloss added) Although the plausibility of equating the two constructions has long been questioned (e.g., Fodor, 1970), the causative structure (3b) cannot provide a direct solution to the PP attachment problem for (3a). Moreover, the binary solution to the problem has been challenged by computational linguists. For example, Merlo and Ferrer (2005) contend that such a dichotomous treatment may be a simplification. They propose to take into account the nature of the attachment by distinguishing PP arguments from PP adjuncts. Sentence (4) is an example of two verb-attached PPs that maintain different relationships with the verb, as shown in the gloss. (4) Put the block on the table in the morning. V NP1 PP argument PP adjunct (Merlo and Ferrer, 2005, p. 342, with gloss added) Since PP arguments carry the core message and PP adjuncts provide additional information to the core meaning, their distinction further refines NLP tasks. Although studies like Merlo and Ferrer (2005) provide novel approaches to tackle the PP attachment problem, the notion of binary sites for PP attachment has not been scrutinized.
The presupposition of binary attachment sites, however, may result in a forced selection of one of the two choices and may overlook other possibilities for correct parsing. Consider the construction in bold in sentence (5) for determining the PP attachment site. (5) Frank sneezed the tissue off the table V NP1 Prep NP2 (Goldberg, 1995, p. 152, with gloss added) According to our first choice, verb-attached parsing, the verb sneezed is to be analyzed with NP1 the tissue. This grouping is semantically invalid since the verb is normally intransitive, without a direct object. Yet, it is no less awkward when the noun-attached option is considered (the tissue off the table). In Goldberg's (1995) seminal work on construction grammar, she discusses the basic semantics of the caused-motion construction, namely that "the causer argument directly causes the theme argument to move along a path designated by the directional phrase: that is, 'X CAUSES Y to MOVE Z'" (p. 152). In brief, the caused-motion construction includes a directional phrase like an into PP and entails a movement feature. However, (5) illustrates an atypical example of the caused-motion construction, where the construction cannot be interpreted through its components, which is what the PP attachment problem is based on. According to Goldberg, the semantic meaning of (5) can only be derived by taking into account the entire construction. In other words, to address the PP attachment issue in sentences like (5), we need to take into account a third possible structure in addition to a binary choice between verb and noun attachment. In this study, we take the construction grammar approach to reformalize the PP attachment problem. In addition to the conventional binary approach to determining PP attachment sites, we suggest a third possible structure where the PP co-attaches to both the verb and the noun, based on the construction grammar framework. We also develop a semantic analysis of the feature movement (denoted as [+movement] or [-movement]) for the verb and direct object in the V NP1 into NP2 construction to determine the PP attachment site. Our proposal examines the WSJ corpus data by means of manual annotation. | 0
Reformulations may occur in written and spoken language, where they serve different functions (Flottum, 1995; Rossari, 1992): in spoken language, they mark the elaboration of ideas, and are punctuated by hesitations, false starts, and repetitions (Blanche-Benveniste et al., 1991); in written documents, we usually find the final result of the reformulation process (Hagège, 1985). Reformulation is considered to be the activity by which speakers build on their own linguistic production, or on that of their interlocutor, with or without specific markers. The objective is then to modify some aspects (lexical, syntactic, semantic, pragmatic) while keeping the semantic content constant (Gülich and Kotschi, 1987; Kanaan, 2011). Specific reformulation markers may provide the formal mark-up of reformulations. Reformulation is closely related to paraphrase, in the sense that reformulated sequences can produce paraphrases (Neveu, 2004). Reformulation and paraphrase play an important role in language: • When studying languages, a common exercise consists of paraphrasing expressions in order to check students' understanding of them; • In the same way, it is possible to check the understanding of ideas. The first exercises of this kind appeared with the exegesis of ancient texts: sacred texts (Bible, Koran, Torah) first, and then theological, philosophical and scientific texts; • More naturally, speakers use reformulation and paraphrase to make their thoughts more precise and to transmit them better. It is also common to find reformulations in written language: between various versions of the same literary work (Fuchs, 1982), of Wikipedia articles (Vila et al., 2014), or of scientific articles. Authors can thus rewrite their text several times until they produce the version that finally suits them.
Most often, these classifications address one given aspect, such as linguistic characteristics (Melčuk, 1988; Vila et al., 2011; Bhagat and Hovy, 2013) , size of the paraphrased units (Flottum, 1995; Fujita, 2010; , knowledge required for understanding the paraphrastic relation (Milicevic, 2007) , language register. To our knowledge, there is only one multidimensional classification of paraphrase (Milicevic, 2007) . In our work, we also propose to use a multidimensional classification, that covers the following dimensions, some of which are inspired by the previous works (Gulich and Kotschi, 1983; Beeching, 2007; Vila et al., 2011) :• syntactic category of the reformulated segments,• type of lexical relation between the segments (e.g. hyperonymy, synonymy, antonymy, instance, meronymy),• type of lexical modification (e.g. replacement, removal, insertion),• type of morphological modification (i.e. inflection, derivation, compounding),• type of syntactic modification (e.g. passive/active way),• type of pragmatic relation between the reformulated segments (e.g. definition, explanation, precision, result, linguistic correction, referential correction, equivalence). | 0 |
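A multidimensional classification like the one listed above can be represented directly as an annotation record. The sketch below is a minimal illustration; the field names and example values are assumptions derived from the dimensions listed, not the authors' actual annotation format.

```python
# Minimal sketch of a multidimensional reformulation/paraphrase annotation.
# Field names and value sets are assumptions based on the dimensions above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReformulationAnnotation:
    source_segment: str                         # original segment
    target_segment: str                         # reformulated segment
    syntactic_category: str                     # e.g. "NP", "VP", "clause"
    lexical_relation: Optional[str]             # e.g. "synonymy", "hyperonymy", "antonymy"
    lexical_modification: Optional[str]         # e.g. "replacement", "removal", "insertion"
    morphological_modification: Optional[str]   # "inflection", "derivation", "compounding"
    syntactic_modification: Optional[str]       # e.g. "passive/active alternation"
    pragmatic_relation: Optional[str]           # e.g. "definition", "explanation", "equivalence"

example = ReformulationAnnotation(
    source_segment="myocardial infarction",
    target_segment="heart attack",
    syntactic_category="NP",
    lexical_relation="synonymy",
    lexical_modification="replacement",
    morphological_modification=None,
    syntactic_modification=None,
    pragmatic_relation="equivalence",
)
print(example)
```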
Reproducibility is an utmost priority in research to ensure reliability of scientific findings. Informally, it describes the ability to repeat a study, beginning with the same starting point, using the same resources (if possible) and achieving the same results and conclusions (Pineau et al., 2020) . Reproducibility requires that approaches in publications be recorded in such a way that previously uninvolved parties can comprehend and recreate them (Fokkens et al., 2013) . However, reproducibility is a complex requirement which often fails because of missing details (like not described data sets or missing key parameters)-such aspects, even though they may appear minor at first sight, either prevent reproducibility altogether or at least distort the results (Raff, 2019; Wieling et al., 2018) . One reason for such failures of reproducibility may be lack of widely accepted definitions and practical conceptualization of reproducibility, as there is currently no consensus on how and to what level of detail research should be documented (Cohen et al., 2018) .The Shared Task ReproGen (Belz et al., 2021 ) deals with the reproducibility problem. In particular, it aims to investigate reproducibility of human evaluation. The findings of ReproGen should yield general insights into how reproducibility can be improved. The task in ReproGen is to replicate either one of the pre-selected studies or a self-selected study from the field of Natural Language Generation (NLG) and to document the findings.In this paper, we report on our reproducibility of the work "Generation of Company descriptions using concept-to-text and text-to-text deep models: dataset collection and systems evaluation" (CompDesc for short) by Qader et al. (2018) . This work analyzes multiple sequence-to-sequence models that were used to generate short company descriptions from Wikipedia articles. This includes both automatic and human evaluation which are then compared with each other. Our replication focuses on the human evaluation, in accordance with the general outline of ReproGen. | 0 |
Nowadays dialogue systems are becoming more and more ubiquitous in our lives. It is essential for such systems to perceive the environment, gather data and convey useful information to humans in an accessible fashion. Video question answering (VideoQA) systems provide a convenient way for humans to acquire visual information about the environment. If a user wants to obtain information about a dynamic scene, one can simply ask the VideoQA system a question in natural language, and the system generates a natural-language answer. The task of a VideoQA dialogue system in this paper is described as follows. Given a video as grounding evidence, in each dialogue turn, the system is presented with a question and is required to generate an answer in natural language. Figure 1 shows an example of multi-turn VideoQA (a dialogue about a clip in which a person is packing a bag and then looking into the mirror, with exchanges such as "What room is this person in?" "It looks like a bedroom or a dorm room."). It is composed of a video clip and a dialogue, where the dialogue contains open-ended question-answer pairs regarding the scene in the video. In order to answer the questions correctly, the system needs to be effective at understanding the question, the video and the dialogue context altogether. Recent work on VideoQA has shown promising performance using multi-modal attention fusion for combining features from different modalities (Xu et al., 2017; Zeng et al., 2017; Zhao et al., 2018; Gao et al., 2018). However, one of the challenges is that the length of the video sequence can be very long and the question may concern only a small segment in the video. Therefore, it may be time-inefficient to encode the entire video sequence using a recurrent neural network. In this work, we present the question-guided video representation module which learns 1) to summarize the video frame features efficiently using an attention mechanism and 2) to perform feature selection through a gating mechanism. The learned question-guided video representation is a compact video summary for each token in the question. The video summary and question information are then fused to create multi-modal representations. The multi-modal representations and the dialogue context are then passed as input to a sequence-to-sequence model with attention to generate the answer (Section 3). We empirically demonstrate the effectiveness of the proposed methods using the AVSD dataset (Alamri et al., 2019a) for evaluation (Section 4). The experiments show that our model for single-turn VideoQA achieves state-of-the-art performance, and our multi-turn VideoQA model shows competitive performance, in comparison with existing approaches (Section 5). | 0
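The question-guided summarization and gating described above can be sketched as follows. This is a simplified stand-in rather than the authors' implementation: the dimensions, the single-head dot-product attention and the sigmoid gate are assumptions made for illustration.

```python
# Minimal sketch of question-guided video summarization with a gate.
# Shapes, single-head dot-product attention and the sigmoid gate are
# simplifying assumptions, not the authors' exact architecture.
import torch
import torch.nn as nn

class QuestionGuidedVideo(nn.Module):
    def __init__(self, q_dim: int, v_dim: int, hidden: int):
        super().__init__()
        self.q_proj = nn.Linear(q_dim, hidden)   # project question tokens
        self.v_proj = nn.Linear(v_dim, hidden)   # project video frame features
        self.gate = nn.Linear(q_dim + hidden, hidden)

    def forward(self, q_tokens, frames):
        # q_tokens: (T_q, q_dim), frames: (T_v, v_dim)
        q = self.q_proj(q_tokens)                 # (T_q, hidden)
        v = self.v_proj(frames)                   # (T_v, hidden)
        scores = q @ v.t()                        # (T_q, T_v) attention scores
        attn = torch.softmax(scores, dim=-1)
        summary = attn @ v                        # per-token video summary (T_q, hidden)
        g = torch.sigmoid(self.gate(torch.cat([q_tokens, summary], dim=-1)))
        return g * summary                        # gated, question-guided representation

model = QuestionGuidedVideo(q_dim=32, v_dim=64, hidden=48)
out = model(torch.randn(7, 32), torch.randn(100, 64))
print(out.shape)  # torch.Size([7, 48])
```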
The annotation of full text documents is a costly and time-consuming task. Thus, it is important to design annotation tools in such a way that the annotation process can happen as swiftly as possible. To this end, we extend WebAnno with the capability of suggesting annotations to the annotator.A general-purpose web-based annotation tool can greatly lower the entrance barrier for linguistic annotation projects, as tool development costs and preparatory work are greatly reduced. WebAnno 1.0 only partially fulfilled desires regarding generality: Although it covered already more kinds of annotations than most other tools, it supported only a fixed set of customizable annotation layers (named entities, part-of-speech, lemmata, coreference, dependencies). Thus, we also remove a limitation of the tool, which was previously bound to specific, hardcoded annotation layers.We have generalized the architecture to support three configurable generic structures: spans, relations, and chains. These support all of the original layers and allow the user to define arbitrary custom annotation layers based on either of these structures. Additionally, our approach allows maintaining multiple properties on annotations, e.g. to support morphological annotations, while previously only one property per annotation was supported.Automatic suggestion of annotations is based on machine learning, which is common practice in annotation tools. However, most of existing web-based annotation tools, such as GATE (Cunningham et al., 2011) or brat (Stenetorp et al., 2012) , depend on external preprocessing and postprocessing plugins or on web services. These tools have limitations regarding adaptability (difficulty to adapt to other annotation tasks), reconfigurability (generating a classifier when new features and training documents are available is complicated), and reusability (requires manual intervention to add newly annotated documents into the iteration).For our approach, we assume that an annotator actually does manually verify all annotations to produce a completely labeled dataset. This task can be sped up by automatically suggesting annotations that the annotator may then either accept or correct. Note that this setup and its goal differs from an active learning scenario, where a system actively determines the most informative yet unannotated example to be labeled, in order to quickly arrive at a high-quality classifier that is then to be applied to large amounts of unseen data.Our contribution is the integration of machine learning into the tool to support exhaustive an-notation of documents providing a shorter loop than comparable tools (Cunningham et al., 2011; Stenetorp et al., 2012) , because new documents are added to the training set as soon as they are completed by the annotators. The machine learning support currently applies to sequence classification tasks only. It is complemented by our extension allowing to define custom annotation layers, making it applicable to a wide range of annotation tasks with only little configuration effort.Section 2 reviews related work about the utilization of automatic supports and customization of annotation schemes in existing annotation tools. The integration of automatic suggestions into WebAnno, the design principles followed, and two case studies are explained in Section 3. Section 4 presents the implementation of customizable annotation layers into the tool. Finally, Section 5 summarizes the main contributions and future directions of our work. | 0 |
In the era of big data and deep learning there is an increasing need for large annotated corpora that can be used as training and evaluation data for (semi-)supervised methods. This can be seen by the vast amount of work introducing new datasets and techniques for (semi-)automatically annotating corpora. Different NLP tasks require different kinds of datasets and annotations and provide us with different challenges. One task that has lately gained much attention in the community is the task of Natural Language Inference (NLI) . NLI, also known as Recognizing Textual Entailment (RTE) , is the task of defining the semantic relation between a premise text p and a conclusion text c. p can a) entail, b) contradict or c) be neutral to c. The premise p is taken to entail conclusion c when a human reading p would infer that c is most probably true . This notion of "human reading" assumes human common sense and common background knowledge. This means that a successful automatic NLI system is a suitable evaluation measure for real natural language understanding, as discussed by Condoravdi et al. (2003) and others. It is also a necessary step towards reasoning as more recently discussed by Goldberg and Hirst (2017) and Nangia et al. (2017) who say that solving NLI perfectly means achieving human level understanding of language. Thus, there is an increasing effort to design high-performing NLI systems, which in turn leads to the creation of massive learning corpora. Early datasets, like FraCas (Consortium et al., 1996) or the seven RTE challenges Bar-Haim et al., 2006; Giampiccolo et al., 2007; Dagan et al., 2010; Bentivogli et al., 2009b Bentivogli et al., ,a, 2011 , contained a few hundred handannotated pairs. More recent sets have exploded from some thousand pairs (e.g., SICK, Marelli et al., 2014b) to some hundred thousand examples: SciTail (Khot et al., 2018) , SNLI (Bowman et al., 2015) , Multi-NLI (Williams et al., 2018) . The latter two have been vastly used to train learning algorithms and achieve high performance. However, it was recently shown that this high performance can drop significantly by slightly modifying the training process (Poliak et al., 2017; Glockner et al., 2018) . It was also shown that such training sets contain annotation artifacts that bias the learning (Gururangan et al., 2018; Naik et al., 2018) . Other recent work (Kalouli et al., 2017b discussed problematic annotations of the SICK corpus (Marelli et al., 2014b) and attempted to improve the annotations. All this work leads to the conclusion that corpus construction, including the annotation process, is much more important than what is often assumed and that bad corpora can falsely deliver promising results.In this paper we take a closer look at the work by Kalouli et al. (2017b,a) and attempt to build on the two conclusions that arise from their work. The first conclusion is that the guidelines for the NLI annotation task need be improved, as it seems clear that human annotators often have opposing perspectives when annotating for inference. This can result in faulty and illogical annotations. The second conclusion concerns the annotation procedure: having an inference label is not enough; knowing why a human subject decides that an inference is an entailment or a contradiction is useful information that we should also be collecting, if we want to make sure that the corpus created adheres to the guidelines given. 
Specifically, in this work we discuss an experiment, realized at the University of Colorado Boulder (CU), which attempts to address both these issues: provide uncontroversial, clear guidelines and give the annotators the chance to justify their decisions. Our goal is to evaluate the guidelines based on the resulting agreement rates and gain insights into the NLI annotation task by collecting the annotators' comments on the annotations. Thus, in the current work we make three contributions: Firstly, we discover which linguistic phenomena are hard for humans to annotate and show that these do not always coincide with what is assumed to be difficult for automatic systems. Then, we propose aspects of NLI and of the annotation task itself that should be taken into account when designing future NLI corpora and annotation guidelines. Thirdly, we show that it is essential to include a justification method in similar annotation tasks as a suitable way of checking the guidelines and improving the training and evaluation processes of automatic systems towards explainable AI. | 0 |
Some of the earliest research related to the problem of text segmentation into thematic episodes used the word distribution as an intrinsic feature of texts (Morris and Hirst, 1991) . The studies of (Reynar, 1994; Hearst, 1997; Choi, 2000) continued in this vein. While having quite different emphasis at different levels of detail (basically from the point of view of the employed term weighting and/or the adopted inter-block similarity measure), these studies analyzed the word distribution inside the texts through the instrumentality of merely one feature, i.e. the one-dimensional inter-block similarity.More recent work use techniques from graph theory (Malioutov and Barzilay, 2006) and machine learning (Galley et al., 2003; Georgescul et al., 2006; Purver et al., 2006) in order to find patterns in vocabulary use.We investigate new approaches for topic segmentation on corpora containing multi-party dialogues, which currently represents a relatively less explored domain. Compared to other types of audio content (e.g. broadcast news recordings), meeting recordings are less structured, often exhibiting a high degree of participants spontaneity and there may be overlap in finishing one topic while introducing another. Moreover while ending the discussion on a certain topic, there can be numerous new attempts to introduce a new topic before it becomes the focus of the dialogue. Therefore, the task of automatic topic segmentation of meeting recordings is more difficult and requires a more refined analysis. (Galley et al., 2003; Georgescul et al., 2007) dealt with the problem of topic segmentation of multiparty dialogues by combining various features based on cue phrases, syntactic and prosodic information. In this article, our investigation is based on using merely lexical features.We study mixture models in order to group the words co-occurring in texts into a small number of semantic concepts in an automatic unsupervised way. The intuition behind these models is that a text document has an underlying structure of "latent" topics, which is hidden. In order to reveal these latent topics, the basic assumption made is that words related to a semantic concept tend to occur in the proximity of each other. The notion of proximity between semantically related words can vary for various tasks. For instance, bigrams can be considered to capture correlation between words at a very short distance. At the other extreme, in the domain of document classification, it is often assumed that the whole document is concerned with one specific topic and in this sense all words in a document are considered to be semantically related. We consider for our application that words occurring in the same thematic episode are semantically related.In the following, the major issues we will discuss include the formulations of two probabilistic mixture approaches, their methodology, aspects of their implementation and the results obtained when applied in the topic segmentation context. Section 2 presents our approach on using probabilistic mixture models for topic segmentation and shows comparisons between these techniques. In Section 3 we discuss our empirical evaluation of these models for topic segmentation. Finally, some conclusions are drawn in Section 4. | 0 |
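The intuition described above, that the topic mixtures of adjacent text blocks diverge at thematic boundaries, can be illustrated with a small sketch. LDA from scikit-learn is used here as a convenient stand-in for the probabilistic mixture models discussed, and the utterance-level blocks and boundary criterion (the minimum of adjacent-block similarity) are arbitrary assumptions.

```python
# Sketch: hypothesize a topic boundary where adjacent blocks' topic mixtures
# diverge most. LDA is a stand-in for the mixture models discussed above.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

blocks = [
    "the budget meeting covered quarterly revenue and costs",
    "revenue projections depend on the new pricing model",
    "next we discussed the office move and the new building",
    "the building has more meeting rooms and parking space",
]

counts = CountVectorizer().fit_transform(blocks)
theta = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sims = [cosine(theta[i], theta[i + 1]) for i in range(len(blocks) - 1)]
print("adjacent-block similarities:", [round(s, 2) for s in sims])
print("hypothesized boundary after block", int(np.argmin(sims)))
```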
Use of social media has enabled the study of psychological and social questions at an unprecedented scale (Lazer et al., 2009). This allows more data-driven discovery alongside the typical hypothesis-testing social science process (Schwartz et al., 2013b). Social media may track disease rates (Paul and Dredze, 2011; Google, 2014), psychological well-being (Dodds et al., 2011; De Choudhury et al., 2013; Schwartz et al., 2013a), and a host of other behavioral, psychological and medical phenomena (Kosinski et al., 2013). Unlike traditional hypothesis-driven social science, such large-scale social media studies rarely take into account, or have access to, age and gender information, which can have a major impact on many questions. For example, females live almost five years longer than males (cdc, 2014; Marengoni et al., 2011). Men and women, on average, differ markedly in their interests and work preferences (Su et al., 2009). With age, personalities gradually change, typically becoming less open to experiences but more agreeable and conscientious (McCrae et al., 1999). Additionally, social media language varies by age (Kern et al., 2014; Pennebaker and Stone, 2003) and gender (Huffaker and Calvert, 2005). Twitter may have a male bias (Mislove et al., 2011), while social media in general skew towards being young and female (pew, 2014). Accessible tools to predict demographic variables can substantially enhance social media's utility for social science, economic, and business applications. For example, one can post-stratify population-level results to reflect a representative sample, understand variation across age and gender groups, or produce personalized marketing, services, and sentiment recommendations; a movie may be generally disliked, except by people in a certain age group, whereas a product might be used primarily by one gender. This paper describes the creation of age and gender predictive lexica (available for download at http://www.wwbp.org/data.html) from a dataset of Facebook users who agreed to share their status updates and reported their age and gender. The lexica, in the form of words with associated weights, are derived from a penalized linear regression (for continuous-valued age) and support vector classification (for binary-valued gender). In this modality, the lexica are simply a transparent and portable means for distributing predictive models based on words. We test generalization and adapt the lexica to blogs and Twitter, plus consider situations when limited messages are available. In addition to use in the computational linguistics community, we believe the lexicon format will make it easier for social scientists to leverage data-driven models where manually created lexica currently dominate (Dodds et al., 2011; Tausczik and Pennebaker, 2010). | 0
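Applying a weighted lexicon of the kind described is straightforward: the prediction is an intercept plus the sum of word weights multiplied by the user's relative word frequencies. The sketch below is a generic illustration; the example weights and intercept are invented for demonstration and are not taken from the released lexica.

```python
# Sketch: applying a word-weight lexicon to predict a continuous attribute (age).
# The weights and intercept are invented for illustration; the real lexica are
# distributed as (word, weight) tables with a fitted intercept.
from collections import Counter

def predict_from_lexicon(text: str, weights: dict, intercept: float) -> float:
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = sum(counts.values())
    score = intercept
    for word, freq in counts.items():
        score += weights.get(word, 0.0) * (freq / total)  # relative frequency
    return score

toy_age_lexicon = {"homework": -8.0, "prom": -6.5, "mortgage": 9.0, "grandkids": 12.0}
print(predict_from_lexicon("finally paid off the mortgage this year", toy_age_lexicon, 28.0))
```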
Keyphrases are usually the selected phrases that can capture the main topics described in a given document (Turney, 2000) . They can provide users with highly condensed and valuable information, and there are a wide variety of sources for keyphrases, including web pages, research articles, books, and even movies. In contrast to keywords, keyphrases usually contain two or more words. Normally, the meaning representations of these phrases are more precise than those of single words. Moreover, along with the increasing development of the internet, this kind of summarization has received continuous consideration in recent years from both the academic and entiprise communities (Witten et al., 1999; Wan and Xiao, 2008; Jiang et al., 2009; Zhao et al., 2011; Tuarob et al., 2015 ).Because of the enormous usefulness of keyphrases, various studies have been conducted on the automatic extraction of keyphrases using different methods, including rich linguistic features (Barker and Cornacchia, 2000; Paukkeri et al., 2008) , supervised classification-based methods (Witten et al., 1999; Wu et al., 2005; Wang et al., 2006) , ranking-based methods (Jiang et al., 2009) , and clustering-based methods (Mori et al., 2007; Danilevsky et al., 2014) . These methods usually focus on extracting keyphrases from a single document or multiple documents. Typically, a large number of words exist in even a document of moderate length, where a few hundred words or more is common. Hence, statistical and linguistic features can be considered to determine the importance of phrases.In addition to the previously mentioned methods, a few researchers have studied the problem of extracting keyphrases from collections of tweets (Zhao et al., 2011; Bellaachia and Al-Dhelaan, 2012) . In contrast to traditional web applications, Twitter-like services usually limit the content length to 140 characters. In (Zhao et al., 2011) , the contextsensitive topical PageRank method was proposed to extract keyphrases by topic from a collection of tweets. NE-Rank was also proposed to rank keywords for the purpose of extracting topical keyphrases (Bellaachia and Al-Dhelaan, 2012) . Because multiple tweets are usually organized by topic, many document-level approaches can also be adopted to achieve the task. In contrast with the previous methods, Marujo et al. (2015) focused on the task of extracting keywords from single tweets. They used several unsupervised methods and word embeddings to construct features. However, the proposed method worked on the word level.In this study, we investigated the problem of automatically extracting keyphrases from single tweets. Compared to the problem of identifying keyphrases from documents containing hundreds of words, the problem of extracting keyphrases from a single short text is generally more difficult. Many linguistic and statistical features (e.g., the number of word occurrences) cannot be determined and used. Moreover, the standard steps of keyphrase extraction usually include keyword ranking, candidate keyphrase generation, and keyphrase ranking. Previous works usually used separate methods to handle these steps. Hence, the error of each step is propagated, which may highly impact the final performance. Another challenge of keyphrase extraction on Twitter is the lack of training and evaluation data. Manual labelling is a time-consuming procedure. 
The labelling consistency of different labellers cannot be easily controlled.To meet these challenges, in this paper, we propose a novel deep recurrent neural network (RNN) model for the joint processing of the keyword ranking, keyphrase generation, and keyphrase ranking steps. The proposed RNN model contains two hidden layers. In the first hidden layer, we capture the keyword information. Then, in the second hidden layer, we extract the keyphrases based on the keyword information using a sequence labelling method. In order to train and evaluate the proposed method, we also proposed a novel method to construct a dataset that contained a large number of tweets with golden standard keyphrases. The proposed dataset construction method was based on the hashtag definitions in Twitter and how these were used in specific tweets.The main contributions of this work can be summarized as follows:• We proposed a two-hidden-layer RNN-based method to jointly model the keyword ranking, keyphrase generation, and keyphrase ranking steps.• To train and evaluate the proposed method, we proposed a novel method for constructing a large dataset, which consisted of more than one million words.• Experimental results demonstrated that the proposed method could achieve better results than the current state-of-the-art methods for these tasks. | 0 |
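A joint keyword-scoring and keyphrase-tagging network of the general shape described above can be sketched as below. This is not the authors' model: the layer sizes, the use of GRUs and the way keyword scores are fed into the second layer are assumptions made for illustration.

```python
# Sketch of a two-layer recurrent model: layer 1 scores keywords, layer 2
# labels keyphrases (BIO tags) from the hidden states plus keyword scores.
# GRUs, dimensions and the concatenation scheme are illustrative assumptions.
import torch
import torch.nn as nn

class JointKeyphraseTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=64, n_tags=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn1 = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.keyword_head = nn.Linear(2 * hidden, 1)          # keyword score per token
        self.rnn2 = nn.GRU(2 * hidden + 1, hidden, batch_first=True, bidirectional=True)
        self.tag_head = nn.Linear(2 * hidden, n_tags)          # B / I / O logits

    def forward(self, token_ids):
        h1, _ = self.rnn1(self.emb(token_ids))                 # (B, T, 2*hidden)
        kw_scores = torch.sigmoid(self.keyword_head(h1))       # (B, T, 1)
        h2, _ = self.rnn2(torch.cat([h1, kw_scores], dim=-1))  # keyword info feeds layer 2
        return kw_scores.squeeze(-1), self.tag_head(h2)        # keyword probs, BIO logits

model = JointKeyphraseTagger(vocab_size=1000)
kw, tags = model(torch.randint(0, 1000, (2, 12)))
print(kw.shape, tags.shape)  # torch.Size([2, 12]) torch.Size([2, 12, 3])
```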
Email for many users has evolved from a mere communication system to a means of organizing workflow, storing information and tracking tasks (i.e. "to do" items) (Bellotti et al., 2003; Cadiz et al., 2001) . Tools available in email clients for managing this information are often cumbersome or even so difficult to discover that users are not aware that the functionality exists. For example, in one email client, Microsoft Outlook, a user must switch views and fill in a form in order to create a task corresponding to the current email message. By automatically identifying tasks that occur in the body of an email message, we hope to simplify the use of email a s a tool for task creation and management.In this paper we describe SmartMail, a prototype system that automatically identifies tasks in email, reformulates them, and presents them to the user in a convenient interface to facilitate adding them to a "to do" list.SmartMail performs a superficial analysis of an email message to distinguish the header, message body (containing the new message content), and forwarded sections. 1 SmartMail breaks the message body into sentences, then determines the speech act of each sentence in the message body by consulting a machine-learned classifier. If the sentence is classified as a task, SmartMail performs additional linguistic processing to reformulate the sentence as a task description. This task description is then p resented to the user. | 0 |
Despite ever-increasing volumes of text documents available online, labelled data remains a scarce resource in many practical NLP scenarios. This scarcity is especially acute when dealing with resource-poor languages and/or uncommon textual domains. This lack of labelled datasets is also common in industry-driven NLP projects that rely on domain-specific labels defined in-house and cannot make use of pre-existing resources. Large pretrained language models and transfer learning (Peters et al., 2018, 2019; Lauscher et al., 2020) can to some extent alleviate this need for labelled data, by making it possible to reuse generic language representations instead of learning models from scratch. However, except for zero-shot learning approaches (Artetxe and Schwenk, 2019; Barnes and Klinger, 2019; Pires et al., 2019), they still require some amount of labelled data from the target domain to fine-tune the neural models to the task at hand. The skweak framework (pronounced /skwi:k/) is a new Python-based toolkit that provides solutions to this scarcity problem. skweak makes it possible to bootstrap NLP models without requiring any hand-annotated data from the target domain. Instead of labelling data by hand, skweak relies on weak supervision to programmatically label data points through a collection of labelling functions (Lison et al., 2020; Safranchik et al., 2020a). The skweak framework allows NLP practitioners to easily construct, apply and aggregate such labelling functions for classification and sequence labelling tasks. skweak comes with a robust and scalable aggregation model that extends the HMM model of Lison et al. (2020). As detailed in Section 4, the model now includes a feature weighting mechanism to capture the correlations that may exist between labelling functions. The general procedure is illustrated in Figure 1. [Figure 1: General overview of skweak: labelling functions (heuristics, gazetteers, etc.) are first applied on a collection of texts (step 1) and their results are then aggregated with a generative model using EM (step 2). A discriminative model is finally trained on those aggregated labels (step 3). The process is illustrated here for NER, but skweak can in principle be applied to any type of sequence labelling or classification task.] Another novel feature of skweak is the ability to create labelling functions that produce underspecified labels. For instance, a labelling function may predict that a token is part of a named entity (but without committing to a specific label), or that a sentence does not express a particular sentiment (but without committing to a specific sentiment category). This ability greatly extends the expressive power of labelling functions and makes it possible to define complex hierarchies between categories: for instance, COMPANY may be a sub-category of ORG, which may itself be a sub-category of ENT. It also enables the expression of "negative" signals that indicate that the output should not be a particular label. Based on our experience applying weak supervision to various NLP tasks, we expect this ability to underspecify output labels to be very useful in NLP applications. | 0
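The labelling-function idea can be illustrated with a generic sketch that applies a few heuristics to tokens and aggregates them by majority vote. This deliberately avoids reproducing skweak's actual API (which operates on spaCy documents and aggregates with a generative HMM model) and should be read as a conceptual illustration only; the gazetteers and the treatment of the underspecified ENT label are assumptions.

```python
# Generic sketch of weak supervision: labelling functions plus majority-vote
# aggregation. Conceptual illustration only; skweak itself works on spaCy docs
# and aggregates with a generative (HMM) model.
from collections import Counter

FIRST_NAMES = {"alice", "bob"}
COMPANY_SUFFIXES = {"inc", "corp", "ltd"}

def lf_first_name(tokens):
    return ["PER" if t.lower() in FIRST_NAMES else "O" for t in tokens]

def lf_company_suffix(tokens):
    labels = ["O"] * len(tokens)
    for i, t in enumerate(tokens):
        if t.lower().rstrip(".") in COMPANY_SUFFIXES and i > 0:
            labels[i - 1] = labels[i] = "ORG"
    return labels

def lf_capitalized(tokens):
    # Noisy, underspecified signal: a non-initial capitalized token is "some entity".
    return ["ENT" if t[0].isupper() and i > 0 else "O" for i, t in enumerate(tokens)]

def aggregate(tokens, lfs):
    votes_per_token = zip(*[lf(tokens) for lf in lfs])
    aggregated = []
    for votes in votes_per_token:
        specific = [v for v in votes if v not in ("O", "ENT")]  # ENT stays underspecified
        aggregated.append(Counter(specific).most_common(1)[0][0] if specific else "O")
    return aggregated

tokens = "Yesterday Alice joined Acme Corp as an engineer".split()
print(list(zip(tokens, aggregate(tokens, [lf_first_name, lf_company_suffix, lf_capitalized]))))
```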
Speech recognition researchers understand what sort of speech is easily recognized by speech recognizers and realize that speech recognizers perform best when dealing with clean speech. On the other hand, most end users of speech recognizers judge the effectiveness of speech recognition from their limited experiences and do not necessarily understand how useful state-of-the-art recognizers can be. Users sometimes do not adequately comprehend what sort of voices or recording conditions make recognition difficult. If they have previously had difficulty being understood by speech recognizers, they often doubt the usefulness of speech recognition and may stop using it.The first aim of this study is to address this problem by promoting the popularization and use of speech recognition by raising end user awareness of state-of-the-art speech recognition performance. For this purpose, we launched a podcast search web service called PodCastle (Goto et al., 2007; Ogata et al., 2007; Ogata and Goto, 2009b; Ogata and Goto, 2009a) in 2006 that allows anonymous users to search and read podcasts, and to share the full text of speech recognition results for podcasts. Podcasts are audio programs distributed on the web, like radio shows or audio blogs. They are becoming increasingly popular because updated podcasts (MP3 audio files) can be easily and frequently downloaded by using RSS syndication feeds. Since various contents have already been published as podcasts, users can grasp the current state of speech recognition technology just by seeing the results of speech recognition applied to published podcasts. This is important because when some users experience recognition errors while speaking into a microphone, they may become uncomfortable or frustrated and lose their motivation. Such problems do not occur for PodCastle because users do not have to provide their own speech input at all.However, even state-of-the-art speech recognizers cannot correctly transcribe all podcasts, because their contents and recording environments vary very widely. A typical approach to deal with speech contents that cannot be properly recognized is to create a speech corpus including such contents and prepare correct transcriptions to train speech recognizers. This approach, however, is impractical for PodCastle because advance preparation of a corpus covering diverse podcast contents will be too costly and time consuming.The second aim of this study is to dispense with the idea of using a pre-prepared corpus to address this problem, and instead employ the efforts of a large number of users to improve speech recognition and full-text search performance. Even if a state-of-the-art speech recognizer is used to recognize podcasts on the web, a number of errors will naturally occur. PodCastle therefore encourages users to cooperate by correcting these errors so that those podcasts can be searched more reliably. Furthermore, using the resulting corrections to train the speech recognizer, it implements a mechanism whereby the speech recognition performance is gradually improved. This approach can be described as collaborative training for speech recognition.In 2006, we coined the term Speech Recognition Research 2.0 (Goto et al., 2007) to refer to the research approach where the current state of speech recognition technology is intentionally disclosed to users so that speech recognition performance can be improved through cooperative participation by users. 
This term was chosen to reflect the concept of Web 2.0 (O'Reilly), since this approach brings the benefits of Web 2.0 to speech recognition research. In Section 2 of this paper, we discuss the research approach that Speech Recognition Research 2.0 represents, and in Section 3 we describe the PodCastle web service as an instance of this approach. In Section 4, we summarize the contributions of this research. | 0 |
Phonological transformations map underlying representations (UR) onto surface forms (SF). The maps between UR and SF are known to be REGULAR (Johnson, 1972; Kaplan and Kay, 1994), meaning they can be modeled with finite state transducers (FST). This generalization is stated as the Regular Hypothesis (1). (1) Regular Hypothesis: Phonological transformations are regular. The Regular Hypothesis is not strong enough. There are many regular maps that are phonologically implausible, and most UR → SF maps belong to the SUBREGULAR classes shown in Figure 1. [Figure 1: hierarchy of classes (left-subsequential, weakly deterministic, regular, non-regular) with example processes: dissimilation (Payne, 2014), consonant harmony (Luo, 2017), bounded copying (Chandlee and Heinz, 2012), local processes (Chandlee, 2014), vowel harmony and stem-controlled vowel harmony (Heinz and Lai, 2013), bidirectional harmony, unbounded circumambient processes, unbounded tonal plateauing and conditional rightward spreading (Jardine, 2016a).] The majority are in the SUBSEQUENTIAL classes in gray. Bidirectional long-distance processes like stem-controlled vowel harmony belong to the more powerful WEAKLY DETERMINISTIC class (Heinz and Lai, 2013). Only two tonal processes, unbounded tonal plateauing and conditional rightward spreading (in bold), have been shown to not belong to any subregular class (Jardine, 2016a). Because of their wide empirical coverage and computational properties, the union of the subsequential classes was an early candidate for a tighter bound on the complexity of phonological processes than the regular class (Chandlee and Heinz, 2012; Gainor et al., 2012). Heinz (forthcoming) states this as the Subsequential Hypothesis (2). The Subsequential Hypothesis is stronger than the Regular Hypothesis, while maintaining its uniform generalization over all phonological transformations. (2) Subsequential Hypothesis: Phonological transformations are left- or right-subsequential. In light of the weakly deterministic and regular processes, the Subsequential Hypothesis is too strong. Because there are phonological transformations that are not subregular, there is not a uniform revision of the Subsequential Hypothesis stronger than the Regular Hypothesis. Jardine (2016a) argues that only tonal processes exceed the weakly deterministic class, so a possible revision states that segmental processes are weakly deterministic and tonal processes are regular. In short, from examining the UR → SF maps on their own, there is no subregular class that subsumes all phonological transformations. This paper argues that a uniform revision of the Subsequential Hypothesis obtains by examining not only the UR → SF maps, but also how their derivations are computed. There is an open question in phonological theory whether UR → SF maps are derived in one fell swoop, or whether they are broken down into sub-derivations. For example, consider the sibilant harmony process that transforms the UR /sasasaS/ into the SF [SaSaSaS] in Figure 2. The dashed line directly from the UR to the SF shows the PARALLEL derivation, where every /s/ changes at the same time. The solid lines from UR to SF via two intermediate forms show the SERIAL derivation, where only one /s/ changes at a time. Each line represents one computation made by the phonology. Both derivations yield the same SF, the parallel derivation in one step and the serial in three. In a parallel derivation, the SF is derived directly from the UR, so the derivation is exactly the UR → SF map.
Because they are identical, parallel derivations have the same computational complexity as UR → SF maps. This paper argues that in a serial derivation, where the SF is derived gradually over a number of steps, each step is subsequential. This is stated as the Serial Subsequential Hypothesis (3). (3) Serial Subsequential Hypothesis: Phonological transformations are decomposable into iterated left- or right-subsequential maps. Restricting each step to making a single change requires iterating processes. The solid lines in Figure 2 represent a process that changes one /s/, which applies three times to gradually yield the SF (Figure 2: /sasasaS/ → sasaSaS → saSaSaS → [SaSaSaS]). As Section 4 argues, this restriction also predicts that some regular maps are not possible phonological processes. The paper is organized as follows. Section 2 reviews the characterization of the classes in Figure 1 in terms of FSTs, providing empirical examples, and discusses the serial counterparts of the subregular classes. Section 3 demonstrates that the regular tonal processes can be broken down into subsequential steps in a serial derivation. Sections 4 and 5 discuss the predictions of the Serial Subsequential Hypothesis and conclude. | 0
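The serial derivation shown in Figure 2 can be simulated with a small script that applies a single-change harmony step until no further change is possible. The specific step used here (rewrite the rightmost /s/ that precedes a later /S/) is one way to reproduce the order of the intermediate forms; it is an illustration, not a claim about the paper's formal definition of the step.

```python
# Simulate the serial derivation of sibilant harmony: one /s/ changes per step.
# The rule "rewrite the rightmost s that precedes a later S" is one way to
# reproduce the intermediate forms in Figure 2; it is illustrative only.
def one_step(form: str) -> str:
    for i in range(len(form) - 1, -1, -1):
        if form[i] == "s" and "S" in form[i + 1:]:
            return form[:i] + "S" + form[i + 1:]
    return form  # no change possible: the derivation has converged

def serial_derivation(ur: str):
    forms = [ur]
    while True:
        nxt = one_step(forms[-1])
        if nxt == forms[-1]:
            return forms
        forms.append(nxt)

print(" -> ".join(serial_derivation("sasasaS")))
# sasasaS -> sasaSaS -> saSaSaS -> SaSaSaS
```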
Knowledge about habitats of bacteria is crucial for the study of microbial communities, e.g. metagenomics, as well as for various applications such as food processing and health sciences. Although this type of information is available in the biomedical literature, comprehensive resources accumulating the knowledge do not exist (Deléger et al., 2016) .The BioNLP Bacteria Biotope (BB) Shared Tasks are organized to provide a common evaluation platform for language technology researchers interested in developing information extraction methods adapted for the detection of bacteria and their physical locations mentioned in the literature. So far three BB shared tasks have been organized, the latest in 2016 (BB3) consisting of three main * These authors contributed equally. subtasks: named entity recognition and categorization (BB3-cat and BB3-cat+ner), event extraction (BB3-event and BB3-event+ner) and knowledge base extraction. The NER task includes three relevant entity types: HABITAT, BACTERIA and GEOGRAPHICAL, the categorization task focuses on normalizing the mentions to established ontology concepts, although GEOGRAPHICAL entities are excluded from this task, whereas the event extraction aims at finding the relations between these entities, i.e. extracting in which locations certain bacteria live in. The knowledge base extraction task is centered upon aggregating this type of information from a large text corpus.In this paper we revisit the BB3 subtasks of NER, categorization and event extraction, all of which are essential for building a real-world information extraction pipeline. As a result, we present a text mining pipeline which achieves state-of-theart results for the joint evaluation of NER and event extraction as well as for the categorization task using the official BB3 shared task datasets and evaluation tools. Building such end-to-end system is important for bringing the results from the shared tasks to the actual intended users. To our best knowledge, no such system is openly available for bacteria habitat extraction.The pipeline utilizes deep neural networks, conditional random field classifiers and vector space models to solve the various subtasks and the code is freely available at https://github.com/ TurkuNLP/BHE. In the following sections we discuss our system, divided into three modules: entity recognition, categorization and event extraction. We then analyze the results and finally discuss the potential future research directions. | 0 |
We compute the semantic similarity between pairs of sentences by combining a set of similarity metrics at various levels of depth, from surface word similarity to similarities derived from vector models of word or sentence meaning. Regression is then used to determine optimal weightings of the different similarity measures. We use this setting to assess the contributions from several different word embeddings. Our system is based on similarities computed using multiple sets of features: (a) naive lexical features, (b) similarity between vector representations of sentences, and (c) similarity between constituent words computed using WordNet, using the eigenword vector representations of words, and using selectors, which generalize words to a set of words that appear in the same context. | 0
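A reduced version of this setup, with a few similarity features per sentence pair combined by a regressor, can be sketched as follows. The two features, the toy data and the choice of ridge regression are stand-ins for the fuller feature set (WordNet, eigenwords, selectors) described above.

```python
# Sketch: combine simple similarity features with a regressor to predict
# sentence-pair similarity scores. Features and gold scores are illustrative
# stand-ins for the richer feature set described above.
import numpy as np
from sklearn.linear_model import Ridge

def word_overlap(s1: str, s2: str) -> float:
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / len(a | b)

def length_ratio(s1: str, s2: str) -> float:
    l1, l2 = len(s1.split()), len(s2.split())
    return min(l1, l2) / max(l1, l2)

pairs = [
    ("a man is playing a guitar", "a man plays the guitar"),
    ("a dog runs in the park", "a cat sleeps on the sofa"),
    ("the stock market fell today", "shares dropped sharply this morning"),
]
gold = np.array([4.6, 0.8, 3.9])  # invented 0-5 similarity scores

X = np.array([[word_overlap(a, b), length_ratio(a, b)] for a, b in pairs])
model = Ridge(alpha=1.0).fit(X, gold)
print(model.predict(X))  # fitted scores; the learned weights show each feature's contribution
```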