ID: string (length 11-54)
url: string (length 33-64)
title: string (length 11-184)
abstract: string (length 17-3.87k)
label_nlp4sg: bool (2 classes)
task: list
method: list
goal1: string (9 classes)
goal2: string (9 classes)
goal3: string (1 class)
acknowledgments: string (length 28-1.28k)
year: string (length 4)
sdg1: bool (1 class)
sdg2: bool (1 class)
sdg3: bool (2 classes)
sdg4: bool (2 classes)
sdg5: bool (2 classes)
sdg6: bool (1 class)
sdg7: bool (1 class)
sdg8: bool (2 classes)
sdg9: bool (2 classes)
sdg10: bool (2 classes)
sdg11: bool (2 classes)
sdg12: bool (1 class)
sdg13: bool (2 classes)
sdg14: bool (1 class)
sdg15: bool (1 class)
sdg16: bool (2 classes)
sdg17: bool (2 classes)
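The schema above can be read back into typed records. A minimal sketch in Python, using only field names taken from the schema; the `PaperRecord` class and the retyped sample row are illustrative, not the dataset's official loader:

```python
from dataclasses import dataclass, field
from typing import Optional

# Field names follow the schema above; the sdg1..sdg17 flags are
# collected into one dict for compactness. Illustrative sketch only.
@dataclass
class PaperRecord:
    ID: str
    url: str
    title: str
    abstract: str
    label_nlp4sg: bool
    task: list = field(default_factory=list)
    method: list = field(default_factory=list)
    goal1: Optional[str] = None
    goal2: Optional[str] = None
    goal3: Optional[str] = None
    acknowledgments: Optional[str] = None
    year: str = ""
    sdg: dict = field(default_factory=dict)  # {"sdg1": bool, ..., "sdg17": bool}

# One row retyped from the first record below the schema
# (abstract shortened here for brevity).
row = PaperRecord(
    ID="candido-etal-2009-supporting",
    url="https://aclanthology.org/W09-2105",
    title="Supporting the Adaptation of Texts for Poor Literacy Readers: "
          "a Text Simplification Editor for Brazilian Portuguese",
    abstract="In this paper we investigate the task of text simplification ...",
    label_nlp4sg=True,
    goal1="Reduced Inequalities",
    goal2="Quality Education",
    year="2009",
    sdg={f"sdg{i}": i in (4, 10) for i in range(1, 18)},  # sdg4, sdg10 set
)

print(row.label_nlp4sg, row.year, sum(row.sdg.values()))  # True 2009 2
```

Records with `label_nlp4sg` set to false leave the goal fields as null, which the `Optional` defaults mirror.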
candido-etal-2009-supporting
https://aclanthology.org/W09-2105
Supporting the Adaptation of Texts for Poor Literacy Readers: a Text Simplification Editor for Brazilian Portuguese
In this paper we investigate the task of text simplification for Brazilian Portuguese. Our purpose is threefold: to introduce a simplification tool for such language and its underlying development methodology, to present an on-line authoring system of simplified text based on the previous tool, and finally to discuss the potentialities of such technology for education. The resources and tools we present are new for Portuguese and innovative in many aspects with respect to previous initiatives for other languages.
true
[]
[]
Reduced Inequalities
Quality Education
null
We thank the Brazilian Science Foundation FAPESP and Microsoft Research for financial support.
2009
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
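In each record, the seventeen trailing booleans line up with UN SDGs 1-17, and the true flags correspond to the goal strings (here "Quality Education" is SDG 4 and "Reduced Inequalities" is SDG 10). A hedged sketch of decoding those flags; the name table is partial, retyped only from goal strings that appear in the rows of this dump:

```python
# Partial SDG-number -> name table, covering the goal strings seen in
# this dump. Illustrative only, not an official mapping file.
SDG_NAMES = {
    3: "Good Health and Well-Being",
    4: "Quality Education",
    8: "Decent Work and Economic Growth",
    10: "Reduced Inequalities",
    16: "Peace, Justice and Strong Institutions",
}

def goals_from_flags(flags):
    """Map the sdg1..sdg17 booleans to goal names (unmapped SDGs kept by number)."""
    return [SDG_NAMES.get(i, f"SDG {i}") for i, on in enumerate(flags, start=1) if on]

# Flags for candido-etal-2009-supporting above: sdg4 and sdg10 are true.
flags = [i in (4, 10) for i in range(1, 18)]
print(goals_from_flags(flags))  # ['Quality Education', 'Reduced Inequalities']
```

Note the decoded list is ordered by SDG number, while goal1/goal2 in the record are ordered by annotation priority, so the two orderings need not match.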
obamuyide-vlachos-2019-model
https://aclanthology.org/P19-1589
Model-Agnostic Meta-Learning for Relation Classification with Limited Supervision
In this paper we frame the task of supervised relation classification as an instance of meta-learning. We propose a model-agnostic meta-learning protocol for training relation classifiers to achieve enhanced predictive performance in limited supervision settings. During training, we aim to not only learn good parameters for classifying relations with sufficient supervision, but also learn model parameters that can be fine-tuned to enhance predictive performance for relations with limited supervision. In experiments conducted on two relation classification datasets, we demonstrate that the proposed meta-learning approach improves the predictive performance of two state-of-the-art supervised relation classification models.
false
[]
[]
null
null
null
The authors acknowledge support from the EU H2020 SUMMA project (grant agreement number 688139). We are grateful to Yuhao Zhang for sharing his data with us.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lehman-etal-2019-inferring
https://aclanthology.org/N19-1371
Inferring Which Medical Treatments Work from Reports of Clinical Trials
How do we know if a particular medical treatment actually works? Ideally one would consult all available evidence from relevant clinical trials. Unfortunately, such results are primarily disseminated in natural language scientific articles, imposing substantial burden on those trying to make sense of them. In this paper, we present a new task and corpus for making this unstructured evidence actionable. The task entails inferring reported findings from a full-text article describing a randomized controlled trial (RCT) with respect to a given intervention, comparator, and outcome of interest, e.g., inferring if an article provides evidence supporting the use of aspirin to reduce risk of stroke, as compared to placebo. We present a new corpus for this task comprising 10,000+ prompts coupled with full-text articles describing RCTs. Results using a suite of models, ranging from heuristic (rule-based) approaches to attentive neural architectures, demonstrate the difficulty of the task, which we believe largely owes to the lengthy, technical input texts. To facilitate further work on this important, challenging problem we make the corpus, documentation, a website and leaderboard, and code for baselines and evaluation available at http:
true
[]
[]
Good Health and Well-Being
null
null
This work was supported by NSF CAREER Award 1750978. We also acknowledge ITS at Northeastern for providing high performance computing resources that have supported this research.
2019
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
knorz-1982-recognition
https://aclanthology.org/C82-1026
Recognition of Abstract Objects - A Decision Theory Approach Within Natural Language Processing
The DAISY/ALIBABA-system developed within the WAI-project represents both a specific solution to the automatic indexing problem and a general framework for problems in the field of natural language processing, characterized by fuzziness and uncertainty. The WAI approach to the indexing problem has already been published [3], [5]. This paper however presents the underlying paradigm of recognizing abstract objects. The basic concepts are described, including the decision theory approach used for recognition.
false
[]
[]
null
null
null
Abstracts (FSTA 71/72) containing about 33.000 documents were used as a basis for dictionary construction.
1982
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
becker-1975-phrasal
https://aclanthology.org/T75-2013
The Phrasal Lexicon
to understand the workings of these systems without vainly pretending that they can be reduced to pristine-pure mathematical formulations. Let's Face Facts
false
[]
[]
null
null
null
null
1975
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
loveys-etal-2017-small
https://aclanthology.org/W17-3110
Small but Mighty: Affective Micropatterns for Quantifying Mental Health from Social Media Language
Many psychological phenomena occur in small time windows, measured in minutes or hours. However, most computational linguistic techniques look at data on the order of weeks, months, or years. We explore micropatterns in sequences of messages occurring over a short time window for their prevalence and power for quantifying psychological phenomena, specifically, patterns in affect. We examine affective micropatterns in social media posts from users with anxiety, eating disorders, panic attacks, schizophrenia, suicidality, and matched controls.
true
[]
[]
Good Health and Well-Being
null
null
The authors would like to acknowledge the support of the 2016 Jelinek Memorial Workshop on Speech and Language Technology, at Johns Hopkins University, for providing the concerted time to perform this research. The authors would like to especially thank Craig and Annabelle Bryan for the inspiration for this work and the generosity with which they shared their time to mutually explore results. Finally and more importantly, the authors would like to thank the people who donated their data at OurDataHelps.org to support this and other research endeavors at the intersection of data science and mental health.
2017
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
avvaru-vobilisetty-2020-bert
https://aclanthology.org/2020.semeval-1.144
BERT at SemEval-2020 Task 8: Using BERT to Analyse Meme Emotions
Sentiment analysis is one of the most sought-after research problems among Natural Language Processing (NLP) researchers. The range of problems being addressed by sentiment analysis is ever increasing. Till now, most of the research focuses on predicting sentiment, or sentiment categories like sarcasm, humor, offense and motivation on text data. But, there is very limited research that is focusing on predicting or analyzing the sentiment of internet memes. We try to address this problem as part of "Task 8 of SemEval 2020: Memotion Analysis" (Sharma et al., 2020). We have participated in all the three tasks of Memotion Analysis. Our system built using state-of-the-art pre-trained Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) performed better compared to baseline models for the two tasks A and C and performed close to the baseline model for task B. In this paper, we present the data used for training, data cleaning and preparation steps, the fine-tuning process of the BERT based model and finally predict the sentiment or sentiment categories. We found that sequence models like Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and its variants performed below par in predicting the sentiments. We also performed a comparative analysis with other Transformer based models like DistilBERT (Sanh et al., 2019) and XLNet (Yang et al., 2019).
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
adam-etal-2017-zikahack
https://aclanthology.org/W17-5806
ZikaHack 2016: A digital disease detection competition
Effective response to infectious diseases outbreaks relies on the rapid and early detection of those outbreaks. Unvalidated, yet timely and openly available digital information can be used for the early detection of outbreaks. Public health surveillance authorities can exploit these early warnings to plan and coordinate rapid surveillance and emergency response programs. In 2016, a digital disease detection competition named ZikaHack was launched. The objective of the competition was for multidisciplinary teams to design, develop and demonstrate innovative digital disease detection solutions to retrospectively detect the 2015-16 Brazilian Zika virus outbreak earlier than traditional surveillance methods. In this paper, an overview of the ZikaHack competition is provided. The challenges and lessons learned in organizing this competition are also discussed for use by other researchers interested in organizing similar competitions.
true
[]
[]
Good Health and Well-Being
null
null
Funding for this competition was provided by the National Health and Medical Research Council's
2017
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
martinez-etal-2002-syntactic
https://aclanthology.org/C02-1112
Syntactic Features for High Precision Word Sense Disambiguation
This paper explores the contribution of a broad range of syntactic features to WSD: grammatical relations coded as the presence of adjuncts/arguments in isolation or as subcategorization frames, and instantiated grammatical relations between words. We have tested the performance of syntactic features using two different ML algorithms (Decision Lists and AdaBoost) on the Senseval-2 data. Adding syntactic features to a basic set of traditional features improves performance, especially for AdaBoost. In addition, several methods to build arbitrarily high accuracy WSD systems are also tried, showing that syntactic features allow for a precision of 86% at a coverage of 26%, or a precision of 95% at a coverage of 8%.
false
[]
[]
null
null
null
This research has been partially funded by McyT (Hermes project TIC-2000-0335-C03-03). David Martinez was funded by the Basque Government, grant AE-BFI:01.245).
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
martinez-alonso-etal-2013-annotation
https://aclanthology.org/P13-2127
Annotation of regular polysemy and underspecification
We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods: majority voting with a theory-compliant backoff strategy, and MACE, an unsupervised system to choose the most likely sense from all the annotations.
false
[]
[]
null
null
null
The research leading to these results has been funded by the European Commission's 7th Framework Program under grant agreement 238405 (CLARA).
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
agrawal-carpuat-2020-generating
https://aclanthology.org/2020.ngt-1.21
Generating Diverse Translations via Weighted Fine-tuning and Hypotheses Filtering for the Duolingo STAPLE Task
This paper describes the University of Maryland's submission to the Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). Unlike the standard machine translation task, STAPLE requires generating a set of outputs for a given input sequence, aiming to cover the space of translations produced by language learners. We adapt neural machine translation models to this requirement by (a) generating n-best translation hypotheses from a model fine-tuned on learner translations, oversampled to reflect the distribution of learner responses, and (b) filtering hypotheses using a feature-rich binary classifier that directly optimizes a close approximation of the official evaluation metric. Combination of systems that use these two strategies achieves F1 scores of 53.9% and 52.5% on Vietnamese and Portuguese, ranking 2nd and 4th on the leaderboard, respectively.
true
[]
[]
Quality Education
null
null
null
2020
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
yang-etal-2008-chinese
https://aclanthology.org/C08-1130
Chinese Term Extraction Using Minimal Resources
This paper presents Chinese term extraction using minimal resources. A term candidate extraction algorithm is proposed.
false
[]
[]
null
null
null
This work was done while the first author was working at the Hong Kong Polytechnic University supported by CERG Grant B-Q941 and Central Research Grant: G-U297.
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
goldberg-etal-2013-efficient
https://aclanthology.org/P13-2111
Efficient Implementation of Beam-Search Incremental Parsers
Beam search incremental parsers are accurate, but not as fast as they could be. We demonstrate that, contrary to popular belief, most current implementations of beam parsers in fact run in O(n^2), rather than linear time, because each state transition is actually implemented as an O(n) operation. We present an improved implementation, based on Tree Structured Stack (TSS), in which a transition is performed in O(1), resulting in a real linear-time algorithm, which is verified empirically. We further improve parsing speed by sharing feature extraction and dot-products across beam items. Practically, our methods combined offer a speedup of ∼2x over strong baselines on Penn Treebank sentences, and are orders of magnitude faster on much longer sentences.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nissim-etal-2013-cross
https://aclanthology.org/W13-0501
Cross-linguistic annotation of modality: a data-driven hierarchical model
We present an annotation model of modality which is (i) cross-linguistic, relying on a wide, strongly typologically motivated approach, and (ii) hierarchical and layered, accounting for both factuality and speaker's attitude, while modelling these two aspects through separate annotation schemes. Modality is defined through cross-linguistic categories, but the classification of actual linguistic expressions is language-specific. This makes our annotation model a powerful tool for investigating linguistic diversity in the field of modality on the basis of real language data, being thus also useful from the perspective of machine translation systems.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jauregi-unanue-etal-2020-leveraging
https://aclanthology.org/2020.coling-main.395
Leveraging Discourse Rewards for Document-Level Neural Machine Translation
Document-level machine translation focuses on the translation of entire documents from a source to a target language. It is widely regarded as a challenging task since the translation of the individual sentences in the document needs to retain aspects of the discourse at document level. However, document-level translation models are usually not trained to explicitly ensure discourse quality. Therefore, in this paper we propose a training approach that explicitly optimizes two established discourse metrics, lexical cohesion (LC) and coherence (COH), by using a reinforcement learning objective. Experiments over four different language pairs and three translation domains have shown that our training approach has been able to achieve more cohesive and coherent document translations than other competitive approaches, yet without compromising the faithfulness to the reference translation. In the case of the Zh-En language pair, our method has achieved an improvement of 2.46 percentage points (pp) in LC and 1.17 pp in COH over the runner-up, while at the same time improving 0.63 pp in BLEU score and 0.47 pp in F_BERT.
false
[]
[]
null
null
null
The authors would like to thank the RoZetta Institute (formerly CMCRC) for providing financial support to this research. Warmest thanks also go to Dr. Sameen Maruf for her feedback on an early version of this paper.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
alonso-etal-2000-redefinition
https://aclanthology.org/W00-2002
A redefinition of Embedded Push-Down Automata
A new definition of Embedded Push-Down Automata is provided. We prove this new definition preserves the equivalence with tree adjoining languages, and we provide a tabulation framework to execute any automaton in polynomial time with respect to the length of the input string.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lafourcade-1996-structured
https://aclanthology.org/C96-2199
Structured lexical data: how to make them widely available, useful and reasonably protected? A practical example with a trilingual dictionary
We are studying under which constraints structured lexical data can be made, at the same time, widely available to the general public (freely or not), electronically supported, published and reasonably protected from piracy. A three-facet approach, with dictionary tools, web servers and e-mail servers, seems to be effective. We illustrate our views with Alex, a generic dictionary tool, which is used with a French-English-Malay dictionary. The very distinction between output, logical and coding formats is made. Storage is based on the latter and output formats are dynamically generated on the fly at request times, making the tool usable in many configurations. Keeping the data structured is necessary to make them usable also by automated processes and to allow dynamic filtering.
false
[]
[]
null
null
null
My gratefulness goes to the staff of the UTMK and USM, the Dewan Bahasa dan Pustaka and the French Embassy at Kuala Lumpur. I do not forget the staff of the GETA-CLIPS-IMAG laboratory for supporting this project and the reviewers of this paper, namely H. Blanchon, Ch. Boitet, J. Gaschler and G. Sérasset. Of course, all errors remain mine.
1996
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
grenager-etal-2005-unsupervised
https://aclanthology.org/P05-1046
Unsupervised Learning of Field Segmentation Models for Information Extraction
The applicability of many current information extraction techniques is severely limited by the need for supervised training data. We demonstrate that for certain field structured extraction tasks, such as classified advertisements and bibliographic citations, small amounts of prior knowledge can be used to learn effective models in a primarily unsupervised fashion. Although hidden Markov models (HMMs) provide a suitable generative model for field structured text, general unsupervised HMM learning fails to learn useful structure in either of our domains. However, one can dramatically improve the quality of the learned structure by exploiting simple prior knowledge of the desired solutions. In both domains, we found that unsupervised methods can attain accuracies with 400 unlabeled examples comparable to those attained by supervised methods on 50 labeled examples, and that semi-supervised methods can make good use of small amounts of labeled data.
false
[]
[]
null
null
null
We would like to thank the reviewers for their consideration and insightful comments.
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mencarini-2018-potential
https://aclanthology.org/W18-1109
The Potential of the Computational Linguistic Analysis of Social Media for Population Studies
The paper provides an outline of the scope for synergy between computational linguistic analysis and population studies. It first reviews where population studies stand in terms of using social media data. Demographers are entering the realm of big data in force. But, this paper argues, population studies have much to gain from computational linguistic analysis, especially in terms of explaining the drivers behind population processes. The paper gives two examples of how the method can be applied, and concludes with a fundamental caveat. Yes, computational linguistic analysis provides a possible key for integrating micro theory into any demographic analysis of social media data. But results may be of little value inasmuch as fundamental sample characteristics remain unknown.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
liu-etal-2017-generating
https://aclanthology.org/P17-1010
Generating and Exploiting Large-scale Pseudo Training Data for Zero Pronoun Resolution
Most existing approaches for zero pronoun resolution rely heavily on annotated data, which is often released by shared task organizers. Therefore, the lack of annotated data becomes a major obstacle in the progress of the zero pronoun resolution task. Also, it is expensive to spend manpower on labeling the data for better performance. To alleviate the problem above, in this paper, we propose a simple but novel approach to automatically generate large-scale pseudo training data for zero pronoun resolution. Furthermore, we successfully transfer the cloze-style reading comprehension neural network model into the zero pronoun resolution task and propose a two-step training mechanism to overcome the gap between the pseudo training data and the real one. Experimental results show that the proposed approach significantly outperforms the state-of-the-art systems with an absolute improvement of 3.1% F-score on OntoNotes 5.0 data.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their thorough reviewing and proposing thoughtful comments to improve our paper. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015407, Key Projects of National Natural Science Foundation of China via grant 61632011,
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kuhn-etal-2006-segment
https://aclanthology.org/N06-1004
Segment Choice Models: Feature-Rich Models for Global Distortion in Statistical Machine Translation
This paper presents a new approach to distortion (phrase reordering) in phrasebased machine translation (MT). Distortion is modeled as a sequence of choices during translation. The approach yields trainable, probabilistic distortion models that are global: they assign a probability to each possible phrase reordering. These "segment choice" models (SCMs) can be trained on "segment-aligned" sentence pairs; they can be applied during decoding or rescoring. The approach yields a metric called "distortion perplexity" ("disperp") for comparing SCMs offline on test data, analogous to perplexity for language models. A decision-tree-based SCM is tested on Chinese-to-English translation, and outperforms a baseline distortion penalty approach at the 99% confidence level.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gnehm-clematide-2020-text
https://aclanthology.org/2020.nlpcss-1.10
Text Zoning and Classification for Job Advertisements in German, French and English
We present experiments to structure job ads into text zones and classify them into professions, industries and management functions, thereby facilitating social science analyses on labor market demand. Our main contributions are empirical findings on the benefits of contextualized embeddings and the potential of multi-task models for this purpose. With contextualized in-domain embeddings in BiLSTM-CRF models, we reach an accuracy of 91% for token-level text zoning and outperform previous approaches. A multi-tasking BERT model performs well for our classification tasks. We further compare transfer approaches for our multilingual data.
true
[]
[]
Decent Work and Economic Growth
null
null
We thank Dong Nguyen and the anonymous reviewers for their careful reading of this article and their helpful comments and suggestions, and Helen Buchs for her efforts in post-evaluation. This work is supported by the Swiss National Science Foundation under grant number 407740 187333.
2020
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
yu-etal-2018-device
https://aclanthology.org/C18-2028
On-Device Neural Language Model Based Word Prediction
Recent developments in deep learning with application to language modeling have led to success in tasks of text processing, summarizing and machine translation. However, deploying huge language models on mobile devices for on-device keyboards poses computation as a bottleneck due to their puny computation capacities. In this work, we propose an on-device neural language model based word prediction method that optimizes run-time memory and also provides a realtime prediction environment. Our model size is 7.40MB and has average prediction time of 6.47 ms. The proposed model outperforms existing methods for word prediction in terms of keystroke savings and word prediction rate and has been successfully commercialized.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-etal-2013-identifying
https://aclanthology.org/N13-1124
Identifying Intention Posts in Discussion Forums
This paper proposes to study the problem of identifying intention posts in online discussion forums. For example, in a discussion forum, a user wrote "I plan to buy a camera," which indicates a buying intention. This intention can be easily exploited by advertisers. To the best of our knowledge, there is still no reported study of this problem. Our research found that this problem is particularly suited to transfer learning because in different domains, people express the same intention in similar ways. We then propose a new transfer learning method which, unlike a general transfer learning algorithm, exploits several special characteristics of the problem. Experimental results show that the proposed method outperforms several strong baselines, including supervised learning in the target domain and a recent transfer learning method.
false
[]
[]
null
null
null
This work was supported in part by a grant from National Science Foundation (NSF) under grant no. IIS-1111092, and a grant from HP Labs Innovation Research Program.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nguyen-verspoor-2018-improved
https://aclanthology.org/K18-2008
An Improved Neural Network Model for Joint POS Tagging and Dependency Parsing
We propose a novel neural network model for joint part-of-speech (POS) tagging and dependency parsing. Our model extends the well-known BIST graph-based dependency parser (Kiperwasser and Goldberg, 2016) by incorporating a BiLSTM-based tagging component to produce automatically predicted POS tags for the parser. On the benchmark English Penn treebank, our model obtains strong UAS and LAS scores at 94.51% and 92.87%, respectively, producing 1.5+% absolute improvements to the BIST graph-based parser, and also obtaining a state-of-the-art POS tagging accuracy at 97.97%. Furthermore, experimental results on parsing 61 "big" Universal Dependencies treebanks from raw texts show that our model outperforms the baseline UDPipe (Straka and Straková, 2017) with 0.8% higher average POS tagging score and 3.6% higher average LAS score. In addition, with our model, we also obtain state-of-the-art downstream task scores for biomedical event extraction and opinion analysis applications.
false
[]
[]
null
null
null
This work was supported by the ARC Discovery Project DP150101550 and ARC Linkage Project LP160101469.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hsu-2014-frequency
https://aclanthology.org/W14-4714
When Frequency Data Meet Dispersion Data in the Extraction of Multi-word Units from a Corpus: A Study of Trigrams in Chinese
One of the main approaches to extract multi-word units is the frequency threshold approach, but the way this approach considers dispersion data still leaves a lot to be desired. This study adopts Gries's (2008) dispersion measure to extract trigrams from a Chinese corpus, and the results are compared with those of the frequency threshold approach. It is found that the overlap between the two approaches is not very large. This demonstrates the necessity of taking dispersion data more seriously and the dynamic nature of lexical representations. Moreover, the trigrams extracted in the present study can be used in a wide range of language resources in Chinese.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
vijay-shanker-etal-1987-characterizing
https://aclanthology.org/P87-1015
Characterizing Structural Descriptions Produced by Various Grammatical Formalisms
We consider the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. In considering the relationship between formalisms, we show that it is useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their derivation trees. We find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. On the basis of this observation, we describe a class of formalisms which we call Linear Context-Free Rewriting Systems, and show they are recognizable in polynomial time and generate only semilinear languages.
false
[]
[]
null
null
null
null
1987
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bos-etal-2003-dipper
https://aclanthology.org/W03-2123
DIPPER: Description and Formalisation of an Information-State Update Dialogue System Architecture
The DIPPER architecture is a collection of software agents for prototyping spoken dialogue systems. Implemented on top of the Open Agent Architecture (OAA), it comprises agents for speech input and output, dialogue management, and further supporting agents. We define a formal syntax and semantics for the DIPPER information state update language. The language is independent of particular programming languages, and incorporates procedural attachments for access to external resources using OAA. Definition: Update. An ordered set of effects e_1, ..., e_n are successfully applied to an information state s, resulting in an information state s' if U(
false
[]
[]
null
null
null
Part of this work was supported by the EU Project MagiCster (IST 1999-29078). We thank Nuance for permission to use their software and tools.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fort-etal-2012-modeling
https://aclanthology.org/C12-1055
Modeling the Complexity of Manual Annotation Tasks: a Grid of Analysis
Manual corpus annotation is getting widely used in Natural Language Processing (NLP). While being recognized as a difficult task, no in-depth analysis of its complexity has been performed yet. We provide in this article a grid of analysis of the different complexity dimensions of an annotation task, which helps estimating beforehand the difficulties and cost of annotation campaigns. We observe the applicability of this grid on existing annotation campaigns and detail its application on a real-world example.
false
[]
[]
null
null
null
This work was realized as part of the Quaero Programme 12 , funded by OSEO, French State agency for innovation.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wilson-etal-2009-articles
https://aclanthology.org/J09-3003
Articles: Recognizing Contextual Polarity: An Exploration of Features for Phrase-Level Sentiment Analysis
Many approaches to automatic sentiment analysis begin with a large lexicon of words marked with their prior polarity (also called semantic orientation). However, the contextual polarity of the phrase in which a particular instance of a word appears may be quite different from the word's prior polarity. Positive words are used in phrases expressing negative sentiments, or vice versa. Also, quite often words that are positive or negative out of context are neutral in context, meaning they are not even being used to express a sentiment. The goal of this work is to automatically distinguish between prior and contextual polarity, with a focus on understanding which features are important for this task. Because an important aspect of the problem is identifying when polar terms are being used in neutral contexts, features for distinguishing between neutral and polar instances are evaluated, as well as features for distinguishing between positive and negative contextual polarity. The evaluation includes assessing the performance of features across multiple machine learning algorithms. For all learning algorithms except one, the combination of all features together gives the best performance. Another facet of the evaluation considers how the presence of neutral instances affects the performance of features for distinguishing between positive and negative polarity. These experiments show that the presence of neutral instances greatly degrades the performance of these features, and that perhaps the best way to improve performance across all polarity classes is to improve the system's ability to identify when an instance is neutral.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their valuable comments and suggestions. This work was supported in part by an Andrew Mellow Predoctoral Fellowship, by the NSF under grant IIS-0208798, by the Advanced Research and Development Activity (ARDA), and by the European IST Programme through the AMIDA Integrated Project FP6-0033812.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
konig-1990-complexity
https://aclanthology.org/C90-2041
The Complexity of Parsing With Extended Categorial Grammars
Instead of incorporating a gap-percolation mechanism for handling certain "movement" phenomena, the extended categorial grammars contain special inference rules for treating these problems. The Lambek categorial grammar is one representative of the grammar family under consideration. It allows for a restricted use of hypothetical reasoning. We define a modification of the Cocke-Younger-Kasami (CKY) parsing algorithm which covers this additional deductive power and analyze its time complexity.
false
[]
[]
null
null
null
The research reported in this paper is supported by the LILOG project, and a doctoral fellowship, both from IBM Deutschland OmbH, and by the Esprit Basic Research Action Project 3175 (DYANA). I thank Andreas Eisele for discussion. The responsibility for errors resides with me.
1990
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sharma-etal-2021-lrg
https://aclanthology.org/2021.semeval-1.21
LRG at SemEval-2021 Task 4: Improving Reading Comprehension with Abstract Words using Augmentation, Linguistic Features and Voting
In this article, we present our methodologies for SemEval-2021 Task-4: Reading Comprehension of Abstract Meaning. Given a fill-in-the-blank-type question and a corresponding context, the task is to predict the most suitable word from a list of 5 options. There are three sub-tasks within this task: Imperceptibility (subtask-I), Non-Specificity (subtask-II), and Intersection (subtask-III). We use encoders of transformer-based models pre-trained on the masked language modelling (MLM) task to build our Fill-in-the-blank (FitB) models. Moreover, to model imperceptibility, we define certain linguistic features, and to model non-specificity, we leverage information from hypernyms and hyponyms provided by a lexical database. Specifically, for non-specificity, we try out augmentation techniques, and other statistical techniques. We also propose variants, namely Chunk Voting and Max Context, to take care of input length restrictions for BERT, etc. Additionally, we perform a thorough ablation study, and use Integrated Gradients to explain our predictions on a few samples. Our best submissions achieve accuracies of 75.31% and 77.84%, on the test sets for subtask-I and subtask-II, respectively. For subtask-III, we achieve accuracies of 65.64% and 62.27%. The code is available here.
false
[]
[]
null
null
null
We thank Rajaswa Patil and Somesh Singh for their support. We would also like to express our gratitude to our colleagues at the Language Research Group (LRG), who have been with us at every stepping stone.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liberman-2009-annotation
https://aclanthology.org/W09-0102
The Annotation Conundrum
Without lengthy, iterative refinement of guidelines, and equally lengthy and iterative training of annotators, the level of inter-subjective agreement on simple tasks of phonetic, phonological, syntactic, semantic, and pragmatic annotation is shockingly low. This is a significant practical problem in speech and language technology, but it poses questions of interest to psychologists, philosophers of language, and theoretical linguists as well.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
indurthi-etal-2019-fermi
https://aclanthology.org/S19-2009
FERMI at SemEval-2019 Task 5: Using Sentence embeddings to Identify Hate Speech Against Immigrants and Women in Twitter
This paper describes our system (Fermi) for Task 5 of SemEval-2019: HatEval: Multilingual Detection of Hate Speech Against Immigrants and Women on Twitter. We participated in the subtask A for English and ranked first in the evaluation on the test set. We evaluate the quality of multiple sentence embeddings and explore multiple training models to evaluate the performance of simple yet effective embedding-ML combination algorithms. Our team-Fermi's model achieved an accuracy of 65.00% for English language in task A. Our models, which use pretrained Universal Encoder sentence embeddings for transforming the input and SVM (with RBF kernel) for classification, scored first position (among 68) in the leaderboard on the test set for Subtask A in English language. In this paper we provide a detailed description of the approach, as well as the results obtained in the task.
true
[]
[]
Peace, Justice and Strong Institutions
Reduced Inequalities
Gender Equality
null
2019
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
false
lewin-2007-basenps
https://aclanthology.org/W07-1022
BaseNPs that contain gene names: domain specificity and genericity
The names of named entities very often occur as constituents of larger noun phrases which denote different types of entity. Understanding the structure of the embedding phrase can be an enormously beneficial first step to enhancing whatever processing is intended to follow the named entity recognition in the first place. In this paper, we examine the integration of general purpose linguistic processors together with domain specific named entity recognition in order to carry out the task of baseNP detection. We report a best F-score of 87.17% on this task. We also report an inter-annotator agreement score of 98.8 Kappa on the task of baseNP annotation of a new data set.
true
[]
[]
Good Health and Well-Being
null
null
null
2007
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cieri-etal-2012-twenty
http://www.lrec-conf.org/proceedings/lrec2012/pdf/1117_Paper.pdf
Twenty Years of Language Resource Development and Distribution: A Progress Report on LDC Activities
On the Linguistic Data Consortium's (LDC) 20th anniversary, this paper describes the changes to the language resource landscape over the past two decades, how LDC has adjusted its practice to adapt to them and how the business model continues to grow. Specifically, we will discuss LDC's evolving roles and changes in the sizes and types of LDC language resources (LR) as well as the data they include and the annotations of that data. We will also discuss adaptations of the LDC business model and the sponsored projects it supports.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lee-haug-2010-porting
http://www.lrec-conf.org/proceedings/lrec2010/pdf/631_Paper.pdf
Porting an Ancient Greek and Latin Treebank
We have recently converted a dependency treebank, consisting of ancient Greek and Latin texts, from one annotation scheme to another that was independently designed. This paper makes two observations about this conversion process. First, we show that, despite significant surface differences between the two treebanks, a number of straightforward transformation rules yield a substantial level of compatibility between them, giving evidence for their sound design and high quality of annotation. Second, we analyze some linguistic annotations that require further disambiguation, proposing some simple yet effective machine learning methods.
false
[]
[]
null
null
null
We thank the Perseus Project, Tufts University, for providing the Greek test set. The first author gratefully acknowledges the support of the Faculty of Humanities at the University of Oslo, where he conducted part of this research.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shelmanov-etal-2021-certain
https://aclanthology.org/2021.eacl-main.157
How Certain is Your Transformer?
In this work, we consider the problem of uncertainty estimation for Transformer-based models. We investigate the applicability of uncertainty estimates based on dropout usage at the inference stage (Monte Carlo dropout). The series of experiments on natural language understanding tasks shows that the resulting uncertainty estimates improve the quality of detection of error-prone instances. Special attention is paid to the construction of computationally inexpensive estimates via Monte Carlo dropout and Determinantal Point Processes.
false
[]
[]
null
null
null
We thank the reviewers for their valuable feedback. The development of uncertainty estimation algorithms for Transformer models (Section 3) was supported by the joint MTS-Skoltech lab. The development of a software system for the experimental study of uncertainty estimation methods and its application to NLP tasks (Section 4) was supported by the Russian Science Foundation grant 20-71-10135. The Zhores supercomputer (Zacharov et al., 2019) was used for computations.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hernandez-calvo-2014-conll
https://aclanthology.org/W14-1707
CoNLL 2014 Shared Task: Grammatical Error Correction with a Syntactic N-gram Language Model from a Big Corpora
We describe our approach to grammatical error correction presented in the CoNLL Shared Task 2014. Our work is focused on error detection in sentences with a language model based on syntactic tri-grams and bi-grams extracted from dependency trees generated from 90% of the English Wikipedia. Also, we add a naïve module for error correction that outputs a set of possible answers; those sentences are scored using a syntactic n-gram language model. The sentence with the best score is the final suggestion of the system. The system was ranked 11th; evidently this is a very simple approach, but from the beginning our main goal was to test the syntactic n-gram language model with a big corpus for future comparison.
false
[]
[]
null
null
null
Work done under partial support of Mexican Government (CONACYT, SNI) and Instituto Politécnico Nacional, México (SIP-IPN, COFAA-IPN, PIFI-IPN).
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ayari-etal-2010-fine
http://www.lrec-conf.org/proceedings/lrec2010/pdf/520_Paper.pdf
Fine-grained Linguistic Evaluation of Question Answering Systems
Question answering systems are complex systems using natural language processing. Some evaluation campaigns are organized to evaluate such systems in order to propose a classification of systems based on final results (number of correct answers). Nevertheless, teams need to evaluate more precisely the results obtained by their systems if they want to do a diagnostic evaluation. There are no tools or methods to do these evaluations systematically. We present REVISE, a tool for glass box evaluation based on diagnostic of question answering system results.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
swanson-charniak-2014-data
https://aclanthology.org/E14-4033
Data Driven Language Transfer Hypotheses
Language transfer, the preferential second language behavior caused by similarities to the speaker's native language, requires considerable expertise to be detected by humans alone. Our goal in this work is to replace expert intervention by data-driven methods wherever possible. We define a computational methodology that produces a concise list of lexicalized syntactic patterns that are controlled for redundancy and ranked by relevancy to language transfer. We demonstrate the ability of our methodology to detect hundreds of such candidate patterns from currently available data sources, and validate the quality of the proposed patterns through classification experiments.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ruder-plank-2017-learning
https://aclanthology.org/D17-1038
Learning to select data for transfer learning with Bayesian Optimization
Domain similarity measures can be used to gauge adaptability and select suitable data for transfer learning, but existing approaches define ad hoc measures that are deemed suitable for respective tasks. Inspired by work on curriculum learning, we propose to learn data selection measures using Bayesian Optimization and evaluate them across models, domains and tasks. Our learned measures outperform existing domain similarity measures significantly on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We show the importance of complementing similarity with diversity, and that learned measures are-to some degree-transferable across models, domains, and even tasks.
false
[]
[]
null
null
null
We thank the anonymous reviewers for their valuable feedback. Sebastian is supported by Irish Research Council Grant Number EBPPG/2014/30 and Science Foundation Ireland Grant Number SFI/12/RC/2289. Barbara is supported by NVIDIA corporation and the Computing Center of the University of Groningen.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ogbuju-onyesolu-2019-development
https://aclanthology.org/W19-3601
Development of a General Purpose Sentiment Lexicon for Igbo Language
null
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yates-etal-2016-effects
https://aclanthology.org/L16-1479
Effects of Sampling on Twitter Trend Detection
Much research has focused on detecting trends on Twitter, including health-related trends such as mentions of Influenza-like illnesses or their symptoms. The majority of this research has been conducted using Twitter's public feed, which includes only about 1% of all public tweets. It is unclear if, when, and how using Twitter's 1% feed has affected the evaluation of trend detection methods. In this work we use a larger feed to investigate the effects of sampling on Twitter trend detection. We focus on using health-related trends to estimate the prevalence of Influenza-like illnesses based on tweets, and use ground truth obtained from the CDC and Google Flu Trends to explore how the prevalence estimates degrade when moving from a 100% to a 1% sample. We find that using the public 1% sample is unlikely to substantially harm ILI estimates made at the national level, but can cause poor performance when estimates are made at the city level.
true
[]
[]
Good Health and Well-Being
null
null
null
2016
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hubert-etal-2016-training
https://aclanthology.org/L16-1514
Training \& Quality Assessment of an Optical Character Recognition Model for Northern Haida
In this paper, we are presenting our work on the creation of the first optical character recognition (OCR) model for Northern Haida, also known as Masset or Xaad Kil, a nearly extinct First Nations language spoken in the Haida Gwaii archipelago in British Columbia, Canada. We are addressing the challenges of training an OCR model for a language with an extensive, non-standard Latin character set as follows: (1) We have compared various training approaches and present the results of practical analyses to maximize recognition accuracy and minimize manual labor. An approach using just one or two pages of Source Images directly performed better than the Image Generation approach, and better than models based on three or more pages. Analyses also suggest that a character's frequency is directly correlated with its recognition accuracy. (2) We present an overview of current OCR accuracy analysis tools available. (3) We have ported the once de-facto standardized OCR accuracy tools to be able to cope with Unicode input. We hope that our work can encourage further OCR endeavors for other endangered and/or underresearched languages. Our work adds to a growing body of research on OCR for particularly challenging character sets, and contributes to creating the largest electronic corpus for this severely endangered language.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
segond-etal-2005-situational
https://aclanthology.org/W05-0213
Situational Language Training for Hotel Receptionists
This paper presents the lessons learned in experimenting with Thetis, an EC project focusing on the creation and localization of enhanced on-line pedagogical content for language learning in tourism industry. It is based on a general innovative approach to language learning that allows employees to acquire practical oral and written skills while navigating a relevant professional scenario. The approach is enabled by an underlying platform (EXILLS) that integrates virtual reality with a set of linguistic technologies to create a new form of dynamic, extensible, goal-directed e-content. The work described in this paper has been supported by the European Commission in the frame of the eContent program.
true
[]
[]
Decent Work and Economic Growth
null
null
null
2005
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
tiedemann-2008-synchronizing
http://www.lrec-conf.org/proceedings/lrec2008/pdf/484_paper.pdf
Synchronizing Translated Movie Subtitles
This paper addresses the problem of synchronizing movie subtitles, which is necessary to improve alignment quality when building a parallel corpus out of translated subtitles. In particular, synchronization is done on the basis of aligned anchor points. Previous studies have shown that cognate filters are useful for the identification of such points. However, this restricts the approach to related languages with similar alphabets. Here, we propose a dictionary-based approach using automatic word alignment. We can show an improvement in alignment quality even for related languages compared to the cognate-based approach.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hsiao-etal-2017-integrating
https://aclanthology.org/I17-1098
Integrating Subject, Type, and Property Identification for Simple Question Answering over Knowledge Base
This paper presents an approach to identify subject, type and property from knowledge base for answering simple questions. We propose new features to rank entity candidates in KB. Besides, we split a relation in KB into type and property. Each of them is modeled by a bi-directional LSTM. Experimental results show that our model achieves the state-of-the-art performance on the SimpleQuestions dataset. The hard questions in the experiments are also analyzed in detail.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kwong-2001-forming
https://aclanthology.org/Y01-1010
Forming an Integrated Lexical Resource for Word Sense Disambiguation
This paper reports a full-scale linkage of noun senses between two existing lexical resources, namely WordNet and Roget's Thesaurus, to form an Integrated Lexical Resource (ILR) for use in natural language processing (NLP). The linkage was founded on a structurally-based sense-mapping algorithm. About 18,000 nouns with over 30,000 senses were mapped. Although exhaustive verification is impractical, we show that it is reasonable to expect some 70-80% accuracy of the resultant mappings. More importantly, the ILR, which contains enriched lexical information, is readily usable in many NLP tasks. We shall explore some practical use of the ILR in word sense disambiguation (WSD), as WSD notably requires a wide range of lexical information.
false
[]
[]
null
null
null
This work was done at the Computer Laboratory, University of Cambridge. The author would like to thank Prof. Karen Sparck Jones for her advice and comments. The work was financially supported by the Committee of Vice-Chancellors and Principals of the Universities of the United Kingdom, the Cambridge Commonwealth Trust, Downing College, and the Croucher Foundation.
2001
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hirasawa-komachi-2019-debiasing
https://aclanthology.org/W19-6604
Debiasing Word Embeddings Improves Multimodal Machine Translation
In recent years, pretrained word embeddings have proved useful for multimodal neural machine translation (NMT) models to address the shortage of available datasets. However, the integration of pretrained word embeddings has not yet been explored extensively. Further, pretrained word embeddings in high dimensional spaces have been reported to suffer from the hubness problem. Although some debiasing techniques have been proposed to address this problem for other natural language processing tasks, they have seldom been studied for multimodal NMT models. In this study, we examine various kinds of word embeddings and introduce two debiasing techniques for three multimodal NMT models and two language pairs-English-German translation and English-French translation. With our optimal settings, the overall performance of multimodal models was improved by up to +1.62 BLEU and +1.14 METEOR for English-German translation and +1.40 BLEU and +1.13 METEOR for English-French translation.
false
[]
[]
null
null
null
This work was partially supported by JSPS Grantin-Aid for Scientific Research (C) Grant Number JP19K12099.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
van-deemter-etal-2017-investigating
https://aclanthology.org/W17-3532
Investigating the content and form of referring expressions in Mandarin: introducing the Mtuna corpus
East Asian languages are thought to handle reference differently from English, particularly in terms of the marking of definiteness and number. We present the first Data-Text corpus for Referring Expressions in Mandarin, and we use this corpus to test some initial hypotheses inspired by the theoretical linguistics literature. Our findings suggest that function words deserve more attention in Referring Expression Generation than they have so far received, and they have a bearing on the debate about whether different languages make different trade-offs between clarity and brevity.
false
[]
[]
null
null
null
This work is partly supported by the National Natural Science Foundation of China, Grant no. 61433015. We thank Stephen Matthews, University of Hong Kong, for comments, and Albert Gatt, University of Malta, for access to Dutch TUNA.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ruan-etal-2016-finding
https://aclanthology.org/P16-2052
Finding Optimists and Pessimists on Twitter
Optimism is linked to various personality factors as well as both psychological and physical health, but how does it relate to the way a person tweets? We analyze the online activity of a set of Twitter users in order to determine how well machine learning algorithms can detect a person's outlook on life by reading their tweets. A sample of tweets from each user is manually annotated in order to establish ground truth labels, and classifiers are trained to distinguish between optimistic and pessimistic users. Our results suggest that the words in people's tweets provide ample evidence to identify them as optimists, pessimists, or somewhere in between. Additionally, several applications of these trained models are explored.
true
[]
[]
Good Health and Well-Being
null
null
We would like to thank Seong Ju Park, Tian Bao, and Yihan Li for their assistance in the initial project that led to this work. This material is based in part upon work supported by the National Science Foundation award #1344257 and by grant #48503 from the John Templeton Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the John Templeton Foundation.
2016
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mirzaei-etal-2016-automatic
https://aclanthology.org/W16-4122
Automatic Speech Recognition Errors as a Predictor of L2 Listening Difficulties
This paper investigates the use of automatic speech recognition (ASR) errors as indicators of the second language (L2) learners' listening difficulties and in doing so strives to overcome the shortcomings of the Partial and Synchronized Caption (PSC) system. PSC is a system that generates a partial caption including difficult words detected based on high speech rate, low frequency, and specificity. To improve the choice of words in this system, and explore a better method to detect speech challenges, ASR errors were investigated as a model of the L2 listener, hypothesizing that some of these errors are similar to those of language learners' when transcribing the videos. To investigate this hypothesis, ASR errors in transcription of several TED talks were analyzed and compared with PSC's selected words. Both the overlapping and mismatching cases were analyzed to investigate possible improvement for the PSC system. Those ASR errors that were not detected by PSC as cases of learners' difficulties were further analyzed and classified into four categories: homophones, minimal pairs, breached boundaries and negatives. These errors were embedded into the baseline PSC to make the enhanced version and were evaluated in an experiment with L2 learners. The results indicated that the enhanced version, which encompasses the ASR errors, addresses most of the L2 learners' difficulties and better assists them in comprehending challenging video segments as compared with the baseline.
true
[]
[]
Quality Education
null
null
null
2016
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
forcada-2000-learning
https://aclanthology.org/2000.bcs-1.7
Learning machine translation strategies using commercial systems: discovering word reordering rules
null
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
patankar-etal-2022-optimize
https://aclanthology.org/2022.dravidianlangtech-1.36
Optimize\_Prime@DravidianLangTech-ACL2022: Abusive Comment Detection in Tamil
This paper tries to address the problem of abusive comment detection in low-resource indic languages. Abusive comments are statements that are offensive to a person or a group of people. These comments are targeted toward individuals belonging to specific ethnicities, genders, caste, race, sexuality, etc. Abusive Comment Detection is a significant problem, especially with the recent rise in social media users. This paper presents the approach used by our team-Optimize_Prime, in the ACL 2022 shared task "Abusive Comment Detection in Tamil." This task detects and classifies YouTube comments in Tamil and Tamil-English Codemixed format into multiple categories. We have used three methods to optimize our results: Ensemble models, Recurrent Neural Networks, and Transformers. In the Tamil data, MuRIL and XLM-RoBERTA were our best performing models with a macro-averaged f1 score of 0.43. Furthermore, for the Codemixed data, MuRIL and M-BERT provided sublime results, with a macro-averaged f1 score of 0.45.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
cieri-etal-2004-fisher
http://www.lrec-conf.org/proceedings/lrec2004/pdf/767.pdf
The Fisher Corpus: a Resource for the Next Generations of Speech-to-Text
This paper describes, within the context of the DARPA EARS program, the design and implementation of the Fisher protocol for collecting conversational telephone speech which has yielded more than 16,000 English conversations. It also discusses the Quick Transcription specification that allowed 2000 hours of Fisher audio to be transcribed in less than one year. Fisher data is already in use within the DARPA EARS programs and will be published via the Linguistic Data Consortium for general use beginning in 2004.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-etal-2020-query
https://aclanthology.org/2020.coling-industry.4
Query Distillation: BERT-based Distillation for Ensemble Ranking
Recent years have witnessed substantial progress in the development of neural ranking networks, but also an increasingly heavy computational burden due to growing numbers of parameters and the adoption of model ensembles. Knowledge Distillation (KD) is a common solution to balance the effectiveness and efficiency. However, it is not straightforward to apply KD to ranking problems. Ranking Distillation (RD) has been proposed to address this issue, but only shows effectiveness on recommendation tasks. We present a novel two-stage distillation method for ranking problems that allows a smaller student model to be trained while benefitting from the better performance of the teacher model, providing better control of the inference latency and computational burden. We design a novel BERT-based ranking model structure for list-wise ranking to serve as our student model. All ranking candidates are fed to the BERT model simultaneously, such that the self-attention mechanism can enable joint inference to rank the document list. Our experiments confirm the advantages of our method, not just with regard to the inference latency but also in terms of higher-quality rankings compared to the original teacher model.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-etal-2016-agreement
https://aclanthology.org/N16-1046
Agreement on Target-bidirectional Neural Machine Translation
Neural machine translation (NMT) with recurrent neural networks has proven to be an effective technique for end-to-end machine translation. However, in spite of its promising advances over traditional translation methods, it typically suffers from an issue of unbalanced outputs, which arises from both the nature of recurrent neural networks themselves and the challenges inherent in machine translation. To overcome this issue, we propose an agreement model for neural machine translation and show its effectiveness on large-scale Japanese-to-English and Chinese-to-English translation tasks. Our results show the model can achieve improvements of up to 1.4 BLEU over the strongest baseline NMT system. With the help of an ensemble technique, this new end-to-end NMT approach finally outperformed phrase-based and hierarchical phrase-based Moses baselines by up to 5.6 BLEU points.
false
[]
[]
null
null
null
We would like to thank the three anonymous reviewers for helpful comments and suggestions. In addition, we would like to thank Rico Sennrich for fruitful discussions.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rapp-2006-exploring
https://aclanthology.org/E06-2018
Exploring the Sense Distributions of Homographs
This paper quantitatively investigates in how far local context is useful to disambiguate the senses of an ambiguous word. This is done by comparing the co-occurrence frequencies of particular context words. First, one context word representing a certain sense is chosen, and then the co-occurrence frequencies with two other context words, one of the same and one of another sense, are compared. As expected, it turns out that context words belonging to the same sense have considerably higher co-occurrence frequencies than words belonging to different senses. In our study, the sense inventory is taken from the University of South Florida homograph norms, and the co-occurrence counts are based on the British National Corpus.
false
[]
[]
null
null
null
I would like to thank the three anonymous reviewers for their detailed and helpful comments.
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ding-etal-2020-coupling
https://aclanthology.org/2020.acl-main.595
Coupling Distant Annotation and Adversarial Training for Cross-Domain Chinese Word Segmentation
Fully supervised neural approaches have achieved significant progress in the task of Chinese word segmentation (CWS). Nevertheless, the performance of supervised models tends to drop dramatically when they are applied to out-of-domain data. Performance degradation is caused by the distribution gap across domains and the out-of-vocabulary (OOV) problem. In order to simultaneously alleviate these two issues, this paper proposes to couple distant annotation and adversarial training for cross-domain CWS. For distant annotation, we rethink the essence of "Chinese words" and design an automatic distant annotation mechanism that does not need any supervision or pre-defined dictionaries from the target domain. The approach could effectively explore domain-specific words and distantly annotate the raw texts for the target domain. For adversarial training, we develop a sentence-level training procedure to perform noise reduction and maximum utilization of the source domain information. Experiments on multiple real-world datasets across various domains show the superiority and robustness of our model, significantly outperforming previous state-of-the-art cross-domain CWS methods.
false
[]
[]
null
null
null
We sincerely thank all the reviewers for their insightful comments and suggestions. This research is partially supported by National Natural Science Foundation of China (Grant No. 61773229 and 61972219), the Basic Research Fund of Shenzhen City (Grant No. JCYJ20190813165003837), and Overseas Cooperation Research Fund of Graduate School at Shenzhen, Tsinghua University (Grant No. HW2018002).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
broeder-etal-2010-data
http://www.lrec-conf.org/proceedings/lrec2010/pdf/163_Paper.pdf
A Data Category Registry- and Component-based Metadata Framework
We describe our computer-supported framework to overcome the rule of metadata schism. It combines the use of controlled vocabularies, managed by a data category registry, with a component-based approach, where the categories can be combined to yield complex metadata structures. A metadata scheme devised in this way will thus be grounded in its use of categories. Schema designers will profit from existing prefabricated larger building blocks, motivating re-use at a larger scale. The common base of any two metadata schemes within this framework will solve, at least to a good extent, the semantic interoperability problem, and consequently, further promote systematic use of metadata for existing resources and tools to be shared.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sinclair-etal-2018-ability
https://aclanthology.org/W18-5005
Does Ability Affect Alignment in Second Language Tutorial Dialogue?
The role of alignment between interlocutors in second language learning is different to that in fluent conversational dialogue. Learners gain linguistic skill through increased alignment, yet the extent to which they can align will be constrained by their ability. Tutors may use alignment to teach and encourage the student, yet still must push the student and correct their errors, decreasing alignment. To understand how learner ability interacts with alignment, we measure the influence of ability on lexical priming, an indicator of alignment. We find that lexical priming in learner-tutor dialogues differs from that in conversational and task-based dialogues, and we find evidence that alignment increases with ability and with word complexity.
true
[]
[]
Quality Education
null
null
Thanks to Amy Isard, Maria Gorinova, Maria Wolters, Federico Fancellu, Sorcha Gilroy, Clara Vania and Marco Damonte as well as the three anonymous reviewers for their useful comments in relation to this paper. A. Sinclair especially acknowledges the help and support of Jon Oberlander during the early development of this idea.
2018
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
shudo-etal-2000-collocations
http://www.lrec-conf.org/proceedings/lrec2000/pdf/2.pdf
Collocations as Word Co-ocurrence Restriction Data - An Application to Japanese Word Processor -
null
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
khlyzova-etal-2022-complementarity
https://aclanthology.org/2022.wassa-1.1
On the Complementarity of Images and Text for the Expression of Emotions in Social Media
Authors of posts in social media communicate their emotions and what causes them with text and images. While there is work on emotion and stimulus detection for each modality separately, it is yet unknown if the modalities contain complementary emotion information in social media. We aim at filling this research gap and contribute a novel, annotated corpus of English multimodal Reddit posts. On this resource, we develop models to automatically detect the relation between image and text, an emotion stimulus category and the emotion class. We evaluate if these tasks require both modalities and find for the image-text relations that text alone is sufficient for most categories (complementary, illustrative, opposing): the information in the text allows to predict if an image is required for emotion understanding. The emotions of anger and sadness are best predicted with a multimodal model, while text alone is sufficient for disgust, joy, and surprise. Stimuli depicted by objects, animals, food, or a person are best predicted by image-only models, while multimodal models are most effective on art, events, memes, places, or screenshots.
false
[]
[]
null
null
null
This work was supported by Deutsche Forschungsgemeinschaft (project CEAT, KL 2869/1-2).
2022
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
juhasz-etal-2019-tuefact
https://aclanthology.org/S19-2206
TueFact at SemEval 2019 Task 8: Fact checking in community question answering forums: context matters
The SemEval 2019 Task 8 on Fact-Checking in community question answering forums aimed to classify questions into categories and verify the correctness of answers given on the QatarLiving public forum. The task was divided into two subtasks: the first classifying the question, the second the answers. The TueFact system described in this paper used different approaches for the two subtasks. Subtask A makes use of word vectors based on a bag-of-word-ngram model using up to trigrams. Predictions are done using multi-class logistic regression. The official SemEval result lists an accuracy of 0.60. Subtask B uses vectorized character n-grams up to trigrams instead. Predictions are done using an LSTM model and achieved an accuracy of 0.53 on the final SemEval Task 8 evaluation set. In a comparison of contextual inputs to subtask B, it was determined that more contextual data improved results, but only up to a point.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
carreras-padro-2002-flexible
http://www.lrec-conf.org/proceedings/lrec2002/pdf/243.pdf
A Flexible Distributed Architecture for Natural Language Analyzers
Many modern NLP applications require basic language processors such as POS taggers, parsers, etc. All these tools are usually preexisting, and must be adapted to fit in the requirements of the application to be developed. This adaptation procedure is usually time consuming and increases the application development cost. Our proposal to minimize this effort is to use standard engineering solutions for software reusability. In that sense, we converted all our language processors to classes which may be instantiated and accessed from any application via a CORBA broker. Reusability is not the only advantatge, since the distributed CORBA approach also makes it possible to access the analyzers from any remote application, developed in any language, and running on any operating system.
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
meerkamp-zhou-2017-boosting
https://aclanthology.org/W17-4307
Boosting Information Extraction Systems with Character-level Neural Networks and Free Noisy Supervision
We present an architecture to boost the precision of existing information extraction systems. This is achieved by augmenting the existing parser, which may be constraint-based or hybrid statistical, with a character-level neural network. Our architecture combines the ability of constraint-based or hybrid extraction systems to easily incorporate domain knowledge with the ability of deep neural networks to leverage large amounts of data to learn complex features. The network is trained using a measure of consistency between extracted data and existing databases as a form of cheap, noisy supervision. Our architecture does not require large scale manual annotation or a system rewrite. It has led to large precision improvements over an existing, highly-tuned production information extraction system used at Bloomberg LP for financial language text.
false
[]
[]
null
null
null
We would like to thank my managers Alex Bozic, Tim Phelan, and Joshwini Pereira for supporting this project, as well as David Rosenberg from the CTO's office for providing access to GPU infrastructure.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
meng-wang-2009-mining
https://aclanthology.org/P09-2045
Mining User Reviews: from Specification to Summarization
This paper proposes a method to extract product features from user reviews and generate a review summary. This method only relies on product specifications, which usually are easy to obtain. Other resources like segmenter, POS tagger or parser are not required. At feature extraction stage, multiple specifications are clustered to extend the vocabulary of product features. Hierarchy structure information and unit of measurement information are mined from the specification to improve the accuracy of feature extraction. At summary generation stage, hierarchy information in specifications is used to provide a natural conceptual view of product features.
false
[]
[]
null
null
null
This research is supported by National Natural Science Foundation of Chinese (No.60675035) and Beijing Natural Science Foundation (No.4072012).
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mahata-etal-2017-bucc2017
https://aclanthology.org/W17-2511
BUCC2017: A Hybrid Approach for Identifying Parallel Sentences in Comparable Corpora
A Statistical Machine Translation (SMT) system is always trained using a large parallel corpus to produce effective translation. Not only is the corpus scarce, it also involves a lot of manual labor and cost. A parallel corpus can be prepared by employing comparable corpora, where a pair of corpora in two different languages points to the same domain. In the present work, we try to build a parallel corpus for the French-English language pair from a given comparable corpus. The data and the problem set are provided as part of the shared task organized by BUCC 2017. We have proposed a system that first translates the sentences by heavily relying on Moses and then groups the sentences based on sentence length similarity. Finally, the one-to-one sentence selection was done based on the Cosine Similarity algorithm.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yokoyama-2013-analysis
https://aclanthology.org/2013.mtsummit-wpt.3
Analysis of parallel structures in patent sentences, focusing on the head words
One of the characteristics of patent sentences is long, complicated modifications. A modification is identified by the presence of a head word in the modifier. We extracted head words with a high occurrence frequency from about 1 million patent sentences. Based on the results, we constructed a modifier correcting system using these head words. About 60% of the errors could be modified with our system.
false
[]
[]
null
null
null
We thank Japio and the committee members for supporting this research and supplying the patent database.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schulze-2001-loom
https://aclanthology.org/Y02-1039
The Loom-LAG for Syntax Analysis : Adding a Language-independent Level to LAG
The left-associative grammar model (LAG) has been applied successfully to the morphologic and syntactic analysis of various European and Asian languages. The algebraic definition of the LAG is very well suited for the application to natural language processing as it inherently obeys de Saussure's second law (de Saussure, 1913, p. 103) on the linear nature of language, which phrase-structure grammar (PSG) and categorial grammar (CG) do not. This paper describes the so-called Loom-LAGs (LLAGs), a specialisation of LAGs for the analysis of natural language. Whereas the only means of language-independent abstraction in ordinary LAG is the principle of possible continuations, LLAGs introduce a set of more detailed language-independent generalisations that form the so-called loom of a Loom-LAG. Every LLAG uses the very same loom and adds the language-specific information in the form of a declarative description of the language, much like an ancient mechanised Jacquard loom would take a program-card providing the specific pattern for the cloth to be woven. The linguistic information is formulated declaratively in so-called syntax plans that describe the sequential structure of clauses and phrases. This approach introduces the explicit notion of phrases and sentence structure to LAG without violating de Saussure's second law and without leaving the ground of the original algebraic definition of LAG. LLAGs can in fact be shown to be just a notational variant of LAG, but one that is much better suited for the manual development of syntax grammars for the robust analysis of free texts. For an in-depth discussion see (Hausser, 1989) and (Hausser, 2001).
false
[]
[]
null
null
null
null
2001
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
balahur-turchi-2013-improving
https://aclanthology.org/R13-1007
Improving Sentiment Analysis in Twitter Using Multilingual Machine Translated Data
Sentiment analysis is currently a very dynamic field in Computational Linguistics. Research herein has concentrated on the development of methods and resources for different types of texts and various languages. Nonetheless, the implementation of a multilingual system that is able to classify sentiment expressed in various languages has not been approached so far. The main challenge this paper addresses is sentiment analysis from tweets in a multilingual setting. We first build a simple sentiment analysis system for tweets in English. Subsequently, we translate the data from English to four other languages (Italian, Spanish, French and German) using a standard machine translation system. Further on, we manually correct the test data and create Gold Standards for each of the target languages. Finally, we test the performance of the sentiment analysis classifiers for the different languages concerned and show that the joint use of training data from multiple languages (especially those pertaining to the same family of languages) significantly improves the results of the sentiment classification.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lai-etal-2021-supervised
https://aclanthology.org/2021.paclic-1.62
Supervised Word Sense Disambiguation on Taiwan Hakka Polysemy with Neural Network Models: A Case Study of BUN, TUNG and LAU
This research aims to explore an optimal model for automatic word sense disambiguation for highly polysemous markers BUN, TUNG and LAU in Taiwan Hakka, a low-resource language. The performance of word sense disambiguation tasks is carried out by examining DNN, BiLSTM and CNN models under different window spans. The results show that the CNN model can achieve the best performance with a multiple sliding window of L2R2+ L3R3 and L5R5.
false
[]
[]
null
null
null
We would like to thank the PACLIC 2021 anonymous reviewers for the valuable comments on this paper and MOST grant (MOST-108-2410-H-004-050-MY3) for supporting the research discussed herein.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
abdul-mageed-etal-2020-nadi
https://aclanthology.org/2020.wanlp-1.9
NADI 2020: The First Nuanced Arabic Dialect Identification Shared Task
We present the results and findings of the First Nuanced Arabic Dialect Identification Shared Task (NADI). This shared task includes two subtasks: country-level dialect identification (Subtask 1) and province-level sub-dialect identification (Subtask 2). The data for the shared task cover a total of 100 provinces from 21 Arab countries and are collected from the Twitter domain. As such, NADI is the first shared task to target naturally-occurring fine-grained dialectal text at the sub-country level. A total of 61 teams from 25 countries registered to participate in the tasks, thus reflecting the interest of the community in this area. We received 47 submissions for Subtask 1 from 18 teams and 9 submissions for Subtask 2 from 9 teams. The Arab world is an extensive geographical region across Africa and Asia, with a population of ∼400 million people whose native tongue is Arabic. Arabic could be classified into three major types: (1) Classical Arabic (CA), the language of the Qur'an and early literature, (2) Modern Standard Arabic (MSA), the medium used in education and formal and pan-Arab media, and (3) dialectal Arabic (DA), a host of geographically and politically defined variants. Modern-day Arabic is also usually described as a diglossic language with a so-called 'High' variety that is used in formal settings (MSA), and a 'Low' variety that is the medium of everyday communication (DA). The presumably 'Low' variety is in reality a collection of variants. One axis of variation for Arabic is geography, where people from various sub-regions, countries, or even provinces within the same country may be using language differently. The goal of the First Nuanced Arabic Dialect Identification (NADI) Shared Task is to provide resources and encourage efforts to investigate questions focused on dialectal variation within the collection of Arabic variants. The NADI shared task targets 21 Arab countries and a total of 100 provinces across these countries. The shared task consists of two subtasks: country-level dialect identification (Subtask 1) and province-level detection (Subtask 2). We provide participants with a new Twitter labeled dataset that we collected exclusively for the purpose of the shared task. The dataset is publicly available for research. A total of 52 teams registered for the shared task, of whom 18 teams ended up submitting their systems for scoring.
false
[]
[]
null
null
null
We gratefully acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences Research Council of Canada (SSHRC), and Compute Canada (www.computecanada.ca). We thank AbdelRahim Elmadany for assisting with dataset preparation, setting up the Codalab for the shared task, and providing the map in Figure 2 .
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-kit-2012-higher
https://aclanthology.org/P12-2001
Higher-order Constituent Parsing and Parser Combination
This paper presents a higher-order model for constituent parsing aimed at utilizing more local structural context to decide the score of a grammar rule instance in a parse tree. Experiments on English and Chinese treebanks confirm its advantage over its first-order version. It achieves its best F1 scores of 91.86% and 85.58% on the two languages, respectively, and further pushes them to 92.80% and 85.60% via combination with other highperformance parsers.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
siddharthan-etal-2011-information
https://aclanthology.org/J11-4007
Information Status Distinctions and Referring Expressions: An Empirical Study of References to People in News Summaries
Although there has been much theoretical work on using various information status distinctions to explain the form of references in written text, there have been few studies that attempt to automatically learn these distinctions for generating references in the context of computer-regenerated text. In this article, we present a model for generating references to people in news summaries that incorporates insights from both theory and a corpus analysis of human written summaries. In particular, our model captures how two properties of a person referred to in the summary, familiarity to the reader and global salience in the news story, affect the content and form of the initial reference to that person in a summary. We demonstrate that these two distinctions can be learned from a typical input for multi-document summarization and that they can be used to make regeneration decisions that improve the quality of extractive summaries.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cer-etal-2010-best
https://aclanthology.org/N10-1080
The Best Lexical Metric for Phrase-Based Statistical MT System Optimization
Translation systems are generally trained to optimize BLEU, but many alternative metrics are available. We explore how optimizing toward various automatic evaluation metrics (BLEU, METEOR, NIST, TER) affects the resulting model. We train a state-of-the-art MT system using MERT on many parameterizations of each metric and evaluate the resulting models on the other metrics and also using human judges. In accordance with popular wisdom, we find that it's important to train on the same metric used in testing. However, we also find that training to a newer metric is only useful to the extent that the MT model's structure and features allow it to take advantage of the metric. Contrasting with TER's good correlation with human judgments, we show that people tend to prefer BLEU and NIST trained models to those trained on edit distance based metrics like TER or WER. Human preferences for METEOR trained models varies depending on the source language. Since using BLEU or NIST produces models that are more robust to evaluation by other metrics and perform well in human judgments, we conclude they are still the best choice for training.
false
[]
[]
null
null
null
The authors thank Alon Lavie for suggesting setting α to 0.5 when training to METEOR. This work was supported by the Defense Advanced Research Projects Agency through IBM. The content does not necessarily reflect the views of the U.S. Government, and no official endorsement should be inferred.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
moran-etal-2018-cross
https://aclanthology.org/L18-1646
Cross-linguistically Small World Networks are Ubiquitous in Child-directed Speech
In this paper we use network theory to model graphs of child-directed speech from caregivers of children from nine typologically and morphologically diverse languages. With the resulting lexical adjacency graphs, we calculate the network statistics N, E, <k>, L, C and compare them against the standard baseline of the same parameters from randomly generated networks of the same size. We show that typologically and morphologically diverse languages all share small world properties in their child-directed speech. Our results add to the repertoire of universal distributional patterns found in the input to children cross-linguistically. We discuss briefly some implications for language acquisition research.
false
[]
[]
null
null
null
The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 615988 (PI Sabine Stoll). We gratefully acknowledge Shanley Allen, Aylin Küntay, and Barbara Pfeiler, who provided the Inuktitut, Turkish, and Yucatec data for our analysis, respectively. We also thank three anonymous reviewers for their feedback.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wittmann-etal-2014-automatic
http://www.lrec-conf.org/proceedings/lrec2014/pdf/574_Paper.pdf
Automatic Extraction of Synonyms for German Particle Verbs from Parallel Data with Distributional Similarity as a Re-Ranking Feature
We present a method for the extraction of synonyms for German particle verbs based on a word-aligned German-English parallel corpus: by translating the particle verb to a pivot, which is then translated back, a set of synonym candidates can be extracted and ranked according to the respective translation probabilities. In order to deal with separated particle verbs, we apply reordering rules to the German part of the data. In our evaluation against a gold standard, we compare different pre-processing strategies (lemmatized vs. inflected forms) and introduce language model scores of synonym candidates in the context of the input particle verb as well as distributional similarity as additional re-ranking criteria. Our evaluation shows that distributional similarity as a re-ranking feature is more robust than language model scores and leads to an improved ranking of the synonym candidates. In addition to evaluating against a gold standard, we also present a small-scale manual evaluation.
false
[]
[]
null
null
null
This work was funded by the DFG Research Project "Distributional Approaches to Semantic Relatedness" (Moritz Wittmann, Marion Weller) and the DFG Heisenberg Fellowship SCHU-2580/1-1 (Sabine Schulte im Walde).
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shioda-etal-2017-suggesting
https://aclanthology.org/W17-5911
Suggesting Sentences for ESL using Kernel Embeddings
Sentence retrieval is an important NLP application for English as a Second Language (ESL) learners. ESL learners are familiar with web search engines, but generic web search results may not be adequate for composing documents in a specific domain. However, if we build our own search system specialized to a domain, it may be subject to the data sparseness problem. Recently proposed word2vec partially addresses the data sparseness problem, but fails to extract sentences relevant to queries owing to the modeling of the latent intent of the query. Thus, we propose a method of retrieving example sentences using kernel embeddings and N-gram windows. This method implicitly models latent intent of query and sentences, and alleviates the problem of noisy alignment. Our results show that our method achieved higher precision in sentence retrieval for ESL in the domain of a university press release corpus, as compared to a previous unsupervised method used for a semantic textual similarity task.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bonin-etal-2020-hbcp
https://aclanthology.org/2020.lrec-1.242
HBCP Corpus: A New Resource for the Analysis of Behavioural Change Intervention Reports
Due to the fast pace at which research reports in behaviour change are published, researchers, consultants and policymakers would benefit from more automatic ways to process these reports. Automatic extraction of the reports' intervention content, population, settings and their results etc. are essential in synthesising and summarising the literature. However, to the best of our knowledge, no unique resource exists at the moment to facilitate this synthesis. In this paper, we describe the construction of a corpus of published behaviour change intervention evaluation reports aimed at smoking cessation. We also describe and release the annotation of 57 entities, that can be used as an off-the-shelf data resource for tasks such as entity recognition, etc. Both the corpus and the annotation dataset are being made available to the community.
true
[]
[]
Good Health and Well-Being
null
null
null
2020
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tauchmann-mieskes-2020-language
https://aclanthology.org/2020.lrec-1.822
Language Agnostic Automatic Summarization Evaluation
So far work on automatic summarization has dealt primarily with English data. Accordingly, evaluation methods were primarily developed with this language in mind. In our work, we present experiments of adapting available evaluation methods such as ROUGE and PYRAMID to non-English data. We base our experiments on various English and non-English homogeneous benchmark data sets as well as a non-English heterogeneous data set. Our results indicate that ROUGE can indeed be adapted to non-English data-both homogeneous and heterogeneous. Using a recent implementation of performing an automatic PYRAMID evaluation, we also show its adaptablilty to non-English data.
false
[]
[]
null
null
null
We would like to thank Yanjun Gao and Rebecca Passonneau for providing the PyrEval code as well as kindly assisting with related questions. This work has been supported by the research center for Digital Communication and Media Innovation (DKMI) and the Institute for Communication and Media (IKUM) at the University of Applied Sciences Darmstadt. Part of this research further received support from the German Research Foundation as part of the Research Training Group "Adaptive Preparation of Information from Heterogeneous Sources" (AIPHES) under grant No. GRK 1994/1.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
moreau-vogel-2014-limitations
https://aclanthology.org/C14-1208
Limitations of MT Quality Estimation Supervised Systems: The Tails Prediction Problem
In this paper we address the question of the reliability of the predictions made by MT Quality Estimation (QE) systems. In particular, we show that standard supervised QE systems, usually trained to minimize MAE, make serious mistakes at predicting the quality of the sentences in the tails of the quality range. We describe the problem and propose several experiments to clarify their causes and effects. We use the WMT12 and WMT13 QE Shared Task datasets to prove that our claims hold in general and are not specific to a dataset or a system.
false
[]
[]
null
null
null
We are grateful to Lucia Specia, Radu Soricut and Christian Buck, the organizers of the WMT 2012 and 2013 Shared Task on Quality Estimation, for releasing all the data related to the competition, including post-edited sentences, feature sets, etc. This research is supported by Science Foundation Ireland (Grant 12/CE/I2267) as part of the Centre for Next Generation Localisation (www.cngl.ie) funding at Trinity College, University of Dublin. The graphics in this paper were created with R (R Core Team, 2012), using the ggplot2 library (Wickham, 2009).
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ejerhed-1990-swedish
https://aclanthology.org/W89-0102
A Swedish Clause Grammar And Its Implementation
The paper is concerned with the notion of clause as a basic, minimal unit for the segmentation and processing of natural language. The first part of the paper surveys various criteria for clausehood that have been proposed in theoretical linguistics and computational linguistics, and proposes that a clause in English or Swedish or any other natural language can be defined in structural terms at the surface level as a regular expression of syntactic categories, equivalently, as a set of sequences of word classes, a possibility which has been explicitly denied by Harris (1968) and later transformational grammarians. The second part of the paper presents a grammar for Swedish clauses, and a newspaper text segmented into clauses by an experimental clause parser intended for a speech synthesis application. The third part of the paper presents some phonetic data concerning the distribution of
false
[]
[]
null
null
null
null
1990
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rubin-vashchilko-2012-identification
https://aclanthology.org/W12-0415
Identification of Truth and Deception in Text: Application of Vector Space Model to Rhetorical Structure Theory
The paper proposes to use Rhetorical Structure Theory (RST) analytic framework to identify systematic differences between deceptive and truthful stories in terms of their coherence and structure. A sample of 36 elicited personal stories, self-ranked as completely truthful or completely deceptive, is manually analyzed by assigning RST discourse relations among a story's constituent parts. Vector Space Model (VSM) assesses each story's position in multi-dimensional RST space with respect to its distance to truth and deceptive centers as measures of the story's level of deception and truthfulness. Ten human judges evaluate if each story is deceptive or not, and assign their confidence levels, which produce measures of the human expected deception and truthfulness levels. The paper contributes to deception detection research and RST twofold: a) demonstration of discourse structure analysis in pragmatics as a prominent way of automated deception detection and, as such, an effective complement to lexico-semantic analysis, and b) development of RST-VSM methodology to interpret RST analysis in identification of previously unseen deceptive texts.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
This research is funded by the New Research and Scholarly Initiative Award (10-303) from the Academic Development Fund at Western.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
venturott-mitkov-2021-fake
https://aclanthology.org/2021.triton-1.16
Fake News Detection for Portuguese with Deep Learning
The exponential growth of the internet and social media in the past decade gave way to the increase in dissemination of false or misleading information. Since the 2016 US presidential election, the term "fake news" became increasingly popular and this phenomenon has received more attention. In the past years several fact-checking agencies were created, but due to the great number of daily posts on social media, manual checking is insufficient. Currently, there is a pressing need for automatic fake news detection tools, either to assist manual fact-checkers or to operate as standalone tools. There are several projects underway on this topic, but most of them focus on English. This research-in-progress paper discusses the employment of deep learning methods, and the development of a tool, for detecting false news in Portuguese. As a first step we shall compare well-established architectures that were tested in other languages and analyse their performance on our Portuguese data. Based on the preliminary results of these classifiers, we shall choose a deep learning model or combine several deep learning models which hold promise to enhance the performance of our fake news detection system.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
miao-etal-2020-diverse
https://aclanthology.org/2020.acl-main.92
A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers
We present ASDiv (Academia Sinica Diverse MWP Dataset), a diverse (in terms of both language patterns and problem types) English math word problem (MWP) corpus for evaluating the capability of various MWP solvers. Existing MWP corpora for studying AI progress remain limited either in language usage patterns or in problem types. We thus present a new English MWP corpus with 2,305 MWPs that cover more text patterns and most problem types taught in elementary school. Each MWP is annotated with its problem type and grade level (for indicating the level of difficulty). Furthermore, we propose a metric to measure the lexicon usage diversity of a given MWP corpus, and demonstrate that ASDiv is more diverse than existing corpora. Experiments show that our proposed corpus reflects the true capability of MWP solvers more faithfully.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
moreno-ortiz-etal-2002-new
http://www.lrec-conf.org/proceedings/lrec2002/pdf/181.pdf
New Developments in Ontological Semantics
In this paper we discuss ongoing activity within the approach to natural language processing known as ontological semantics, as defined in Nirenburg and Raskin (forthcoming). After a brief discussion of the principal tenets on which this approach is built, and a revision of extant implementations that have led toward its present form, we concentrate on some specific aspects that are key to the development of this approach, such as the acquisition of the semantics of lexical items and, intimately connected with this, the ontology, the central resource in this approach. Although we review the fundamentals of the approach, the focus is on practical aspects of implementation, such as the automation of static knowledge acquisition and the acquisition of scripts to enrich the ontology further.
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
guo-etal-2014-crab
https://aclanthology.org/C14-2017
CRAB 2.0: A text mining tool for supporting literature review in chemical cancer risk assessment
Chemical cancer risk assessment is a literature-dependent task which could greatly benefit from text mining support. In this paper we describe CRAB-the first publicly available tool for supporting the risk assessment workflow. CRAB, currently at version 2.0, facilitates the gathering of relevant literature via PubMed queries as well as semantic classification, statistical analysis and efficient study of the literature. The tool is freely available as an in-browser application.
true
[]
[]
Good Health and Well-Being
null
null
This work was supported by the Royal Society, Vinnova and the Swedish Research Council.
2014
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sun-grishman-2010-semi
https://aclanthology.org/C10-2137
Semi-supervised Semantic Pattern Discovery with Guidance from Unsupervised Pattern Clusters
We present a simple algorithm for clustering semantic patterns based on distributional similarity and use cluster memberships to guide semi-supervised pattern discovery. We apply this approach to the task of relation extraction. The evaluation results demonstrate that our novel bootstrapping procedure significantly outperforms a standard bootstrapping. Most importantly, our algorithm can effectively prevent semantic drift and provide semi-supervised learning with a natural stopping criterion.
false
[]
[]
null
null
null
We would like to thank Prof. Satoshi Sekine for his useful suggestions.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nayan-etal-2008-named
https://aclanthology.org/I08-5014
Named Entity Recognition for Indian Languages
This paper talks about a new approach to recognize named entities for Indian languages. Phonetic matching technique is used to match the strings of different languages on the basis of their similar sounding property. We have tested our system with a comparable corpus of English and Hindi language data. This approach is language independent and requires only a set of rules appropriate for a language.
false
[]
[]
null
null
null
The authors gratefully acknowledge financial assistance from TDIL, MCIT (Govt. of India).
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mori-2002-stochastic
https://aclanthology.org/C02-1157
A Stochastic Parser Based on an SLM with Arboreal Context Trees
In this paper, we present a parser based on a stochastic structured language model (SLM) with a flexible history reference mechanism. An SLM is an alternative to an n-gram model as a language model for a speech recognizer. The advantage of an SLM against an n-gram model is the ability to return the structure of a given sentence. Thus SLMs are expected to play an important part in spoken language understanding systems. The current SLMs refer to a fixed part of the history for prediction just like an n-gram model. We introduce a flexible history reference mechanism called an ACT (arboreal context tree; an extension of the context tree to tree-shaped histories) and describe a parser based on an SLM with ACTs. In the experiment, we built an SLM-based parser with a fixed history and one with ACTs, and compared their parsing accuracies. The accuracy of our parser was 92.8%, which was higher than that for the parser with the fixed history (89.8%). This result shows that the flexible history reference mechanism improves the parsing ability of an SLM, which has great importance for language understanding.
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
seyffarth-2019-identifying
https://aclanthology.org/W19-0115
Identifying Participation of Individual Verbs or VerbNet Classes in the Causative Alternation
Verbs that participate in diathesis alternations have different semantics in their different syntactic environments, which need to be distinguished in order to process these verbs and their contexts correctly. We design and implement 8 approaches to the automatic identification of the causative alternation in English (3 based on VerbNet classes, 5 based on individual verbs). For verbs in this alternation, the semantic roles that contribute to the meaning of the verb can be associated with different syntactic slots. Our most successful approaches use distributional vectors and achieve an F1 score of up to 79% on a balanced test set. We also apply our approaches to the distinction between the causative alternation and the unexpressed object alternation. Our best system for this is based on syntactic information, with an F1 score of 75% on a balanced test set.
false
[]
[]
null
null
null
The work presented in this paper was financed by the Deutsche Forschungsgemeinschaft (DFG) within the CRC 991 "The Structure of Representations in Language, Cognition, and Science". The author wishes to thank Laura Kallmeyer, Kilian Evang, Jakub Waszczuk, and three anonymous reviewers for their valuable feedback and helpful comments.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
brooke-etal-2012-building
https://aclanthology.org/W12-2205
Building Readability Lexicons with Unannotated Corpora
Lexicons of word difficulty are useful for various educational applications, including readability classification and text simplification. In this work, we explore automatic creation of these lexicons using methods which go beyond simple term frequency, but without relying on age-graded texts. In particular, we derive information for each word type from the readability of the web documents they appear in and the words they co-occur with, linearly combining these various features. We show the efficacy of this approach by comparing our lexicon with an existing coarse-grained, low-coverage resource and a new crowdsourced annotation.
true
[]
[]
Quality Education
null
null
This work was financially supported by the Natural Sciences and Engineering Research Council of Canada.
2012
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
zhou-etal-2019-unsupervised
https://aclanthology.org/D19-1192
Unsupervised Context Rewriting for Open Domain Conversation
Context modeling has a pivotal role in open domain conversation. Existing works either use heuristic methods or jointly learn context modeling and response generation with an encoder-decoder framework. This paper proposes an explicit context rewriting method, which rewrites the last utterance by considering context history. We leverage pseudoparallel data and elaborate a context rewriting network, which is built upon the Copy-Net with the reinforcement learning method. The rewritten utterance is beneficial to candidate retrieval, explainable context modeling, as well as enabling to employ a single-turn framework to the multi-turn scenario. The empirical results show that our model outperforms baselines in terms of the rewriting quality, the multi-turn response generation, and the end-to-end retrieval-based chatbots.
false
[]
[]
null
null
null
We are thankful to Yue Liu, Sawyer Zeng and Libin Shi for their supportive work. We also gratefully thank the anonymous reviewers for their insightful comments.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
merchant-1993-tipster
https://aclanthology.org/X93-1001
TIPSTER Program Overview
The task of TIPSTER Phase I was to advance the state of the art in two language technologies, Document Detection and Information Extraction. Document Detection includes two subtasks, routing (running static queries against a stream of new data), and retrieval (running ad hoc queries against archival data).
false
[]
[]
null
null
null
null
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kilgarriff-1997-using
https://aclanthology.org/W97-0122
Using Word Frequency Lists to Measure Corpus Homogeneity and Similarity between Corpora
How similar are two corpora? A measure of corpus similarity would be very useful for lexicography and language engineering. Word frequency lists are cheap and easy to generate so a measure based on them would be of use as a quick guide in many circumstances; for example, to judge how a newly available corpus related to existing resources, or how easy it might be to port an NLP system designed to work with one text type to work with another. We show that corpus similarity can only be interpreted in the light of corpus homogeneity. The paper presents a measure, based on the χ2 statistic, for measuring both corpus similarity and corpus homogeneity. The measure is compared with a rank-based measure and shown to outperform it. Some results are presented. A method for evaluating the accuracy of the measure is introduced and some results of using the measure are presented.
false
[]
[]
null
null
null
null
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
song-lee-2017-learning
https://aclanthology.org/E17-2116
Learning User Embeddings from Emails
Many important email-related tasks, such as email classification or search, highly rely on building quality document representations (e.g., bag-of-words or key phrases) to assist matching and understanding. Despite prior success on representing textual messages, creating quality user representations from emails was overlooked. In this paper, we propose to represent users using embeddings that are trained to reflect the email communication network. Our experiments on Enron dataset suggest that the resulting embeddings capture the semantic distance between users. To assess the quality of embeddings in a real-world application, we carry out auto-foldering task where the lexical representation of an email is enriched with user embedding features. Our results show that folder prediction accuracy is improved when embedding features are present across multiple settings.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
xiong-etal-2019-open
https://aclanthology.org/D19-1521
Open Domain Web Keyphrase Extraction Beyond Language Modeling
This paper studies keyphrase extraction in real-world scenarios where documents are from diverse domains and have variant content quality. We curate and release OpenKP, a large scale open domain keyphrase extraction dataset with near one hundred thousand web documents and expert keyphrase annotations. To handle the variations of domain and content quality, we develop BLING-KPE, a neural keyphrase extraction model that goes beyond language understanding using visual presentations of documents and weak supervision from search queries. Experimental results on OpenKP confirm the effectiveness of BLING-KPE and the contributions of its neural architecture, visual features, and search log weak supervision. Zero-shot evaluations on DUC-2001 demonstrate the improved generalization ability of learning from the open domain data compared to a specific domain.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yu-kubler-2011-filling
https://aclanthology.org/W11-0323
Filling the Gap: Semi-Supervised Learning for Opinion Detection Across Domains
We investigate the use of Semi-Supervised Learning (SSL) in opinion detection both in sparse data situations and for domain adaptation. We show that co-training reaches the best results in an in-domain setting with small labeled data sets, with a maximum absolute gain of 33.5%. For domain transfer, we show that self-training gains an absolute improvement in labeling accuracy for blog data of 16% over the supervised approach with target domain training data.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false