Dataset schema (column: type, observed values):
- ID: string, length 11-54
- url: string, length 33-64
- title: string, length 11-184
- abstract: string, length 17-3.87k
- label_nlp4sg: bool, 2 classes
- task: list
- method: list
- goal1: string, 9 classes
- goal2: string, 9 classes
- goal3: string, 1 class
- acknowledgments: string, length 28-1.28k
- year: string, length 4
- sdg1: bool, 1 class
- sdg2: bool, 1 class
- sdg3: bool, 2 classes
- sdg4: bool, 2 classes
- sdg5: bool, 2 classes
- sdg6: bool, 1 class
- sdg7: bool, 1 class
- sdg8: bool, 2 classes
- sdg9: bool, 2 classes
- sdg10: bool, 2 classes
- sdg11: bool, 2 classes
- sdg12: bool, 1 class
- sdg13: bool, 2 classes
- sdg14: bool, 1 class
- sdg15: bool, 1 class
- sdg16: bool, 2 classes
- sdg17: bool, 2 classes
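A minimal sketch of how records with this schema might be consumed: filter for papers labelled NLP4SG and tally which SDG flags they set. The field names mirror the schema above; the two positively-labelled sample records are taken from this dump, but the helper function itself is an illustrative assumption, not part of the dataset.

```python
def sdg_tally(records):
    """Count, per SDG flag, how many NLP4SG-labelled papers set it."""
    counts = {f"sdg{i}": 0 for i in range(1, 18)}
    for rec in records:
        if not rec.get("label_nlp4sg"):
            continue  # skip papers not labelled as NLP4SG
        for key in counts:
            if rec.get(key):
                counts[key] += 1
    return counts


# Three records abridged from this dump (only the fields used here).
records = [
    {"ID": "chiang-etal-2021-improved", "label_nlp4sg": True,
     "goal1": "Good Health and Well-Being", "sdg3": True},
    {"ID": "lukasik-etal-2016-hawkes", "label_nlp4sg": True,
     "goal1": "Peace, Justice and Strong Institutions", "sdg16": True},
    {"ID": "nogueira-cho-2017-task", "label_nlp4sg": False},
]

tally = sdg_tally(records)
print(tally["sdg3"], tally["sdg16"])  # 1 1
```

The same filter-and-count shape applies unchanged if the records are loaded from a columnar store or a dataframe instead of a list of dicts.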
nogueira-cho-2017-task
https://aclanthology.org/D17-1061
Task-Oriented Query Reformulation with Reinforcement Learning
Search engines play an important role in our everyday lives by assisting us in finding the information we need. When we input a complex query, however, results are often far from satisfactory. In this work, we introduce a query reformulation system based on a neural network that rewrites a query to maximize the number of relevant documents returned. We train this neural network with reinforcement learning. The actions correspond to selecting terms to build a reformulated query, and the reward is the document recall. We evaluate our approach on three datasets against strong baselines and show a relative improvement of 5-20% in terms of recall. Furthermore, we present a simple method to estimate a conservative upper bound on a model's performance in a particular environment and verify that there is still large room for improvement.
false
[]
[]
null
null
null
RN is funded by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). KC thanks support by Facebook, Google and NVIDIA. This work was partly funded by the Defense Advanced Research Projects Agency (DARPA) D3M program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
monsalve-etal-2019-assessing
https://aclanthology.org/W19-4010
Assessing Back-Translation as a Corpus Generation Strategy for non-English Tasks: A Study in Reading Comprehension and Word Sense Disambiguation
Corpora curated by experts have sustained Natural Language Processing mainly in English, but the expense of corpus creation is a barrier to development in other languages. Thus, we propose a corpus generation strategy that only requires a machine translation system between English and the target language in both directions, where we filter the best translations by computing automatic translation metrics and the task performance score. By studying Reading Comprehension in Spanish and Word Sense Disambiguation in Portuguese, we identified that a more quality-oriented metric has high potential for corpus selection without degrading task performance. We conclude that it is possible to systematise the building of quality corpora using machine translation and automatic metrics, given some prior effort to clean and process the data.
false
[]
[]
null
null
null
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 825299. Besides, we acknowledge the support of the NVIDIA Corporation with the donation of the Titan Xp GPU used for this study. Finally, the first author is granted by the "Programa de apoyo al desarrollo de tesis de licenciatura" (Support programme of undergraduate thesis development, PADET 2018, PUCP).
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
prolo-2006-handling
https://aclanthology.org/W06-1520
Handling Unlike Coordinated Phrases in TAG by Mixing Syntactic Category and Grammatical Function
Coordination of phrases of different syntactic categories has posed a problem for generative systems based only on syntactic categories. Although some prefer to treat them as exceptional cases that require some extra mechanism (as for elliptical constructions), or to allow for unrestricted cross-category coordination, they can be naturally derived in a grammatical-functional generative approach. In this paper we explore the idea of how mixing syntactic categories and grammatical functions in the label set of a Tree Adjoining Grammar allows us to develop grammars that elegantly handle both same- and cross-category coordination in a uniform way.
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bateman-etal-2002-brief
https://aclanthology.org/W02-1703
A Brief Introduction to the GeM Annotation Schema for Complex Document Layout
In this paper we sketch the design, motivation and use of the GeM annotation scheme: an XML-based annotation framework for preparing corpora involving documents with complex layout of text, graphics, diagrams, layout and other navigational elements. We set out the basic organizational layers, contrast the technical approach with some other schemes for complex markup in the XML tradition, and indicate some of the applications we are pursuing.
false
[]
[]
null
null
null
null
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kreutzer-etal-2020-inference
https://aclanthology.org/2020.emnlp-main.465
Inference Strategies for Machine Translation with Conditional Masking
Conditional masked language model (CMLM) training has proven successful for nonautoregressive and semi-autoregressive sequence generation tasks, such as machine translation. Given a trained CMLM, however, it is not clear what the best inference strategy is. We formulate masked inference as a factorization of conditional probabilities of partial sequences, show that this does not harm performance, and investigate a number of simple heuristics motivated by this perspective. We identify a thresholding strategy that has advantages over the standard "mask-predict" algorithm, and provide analyses of its behavior on machine translation tasks.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
knight-koehn-2003-whats
https://aclanthology.org/N03-5005
What's New in Statistical Machine Translation
null
false
[]
[]
null
null
null
null
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
miquel-ribe-rodriguez-2011-cultural
https://aclanthology.org/R11-1044
Cultural Configuration of Wikipedia: measuring Autoreferentiality in Different Languages
The motivations for writing in Wikipedia given in the current literature often coincide, but none of the studies considers the hypothesis of contributing in order to increase the visibility of one's own national or language-related content. Similar to topical coverage studies, we outline a method for collecting the articles of this content and later analysing them along several dimensions. To prove its universality, the tests are repeated for up to twenty language editions of Wikipedia. Through the best indicators from each dimension we obtain an index which represents the degree of autoreferentiality of the encyclopedia. Last, we point out the impact of this fact and the risk of ignoring its existence in the design of applications based on user-generated content.
false
[]
[]
null
null
null
This work has been partially funded by KNOW2 (TIN2009-14715-C04-04) Eduard Aibar, Amical Viquipèdia, Joan Campàs, Marcos Faúndez. Diana Petri, Pere Tuset, Fina Ribé, Jordi Miquel, Joan Ribé, Peius Cotonat.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
stambolieva-2011-parallel
https://aclanthology.org/W11-4306
Parallel Corpora in Aspectual Studies of Non-Aspect Languages
The paper presents the first results, for Bulgarian and English, of a multilingual Trans-Verba project in progress at the NBU Laboratory for Language Technologies. The project explores the possibility to use Bulgarian translation equivalents in parallel corpora and translation memories as a metalanguage in assigning aspectual values to "non-aspect" language equivalents. The resulting subcorpora of Perfective Aspect and Imperfective Aspect units are then quantitatively analysed and concordanced to obtain parameters of aspectual build-up.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lee-etal-2011-discriminative
https://aclanthology.org/P11-1089
A Discriminative Model for Joint Morphological Disambiguation and Dependency Parsing
Most previous studies of morphological disambiguation and dependency parsing have been pursued independently. Morphological taggers operate on n-grams and do not take into account syntactic relations; parsers use the "pipeline" approach, assuming that morphological information has been separately obtained. However, in morphologically-rich languages, there is often considerable interaction between morphology and syntax, such that neither can be disambiguated without the other. In this paper, we propose a discriminative model that jointly infers morphological properties and syntactic structures. In evaluations on various highly-inflected languages, this joint model outperforms both a baseline tagger in morphological disambiguation, and a pipeline parser in head selection.
false
[]
[]
null
null
null
We thank David Bamman and Gregory Crane for their feedback and support. Part of this research was performed by the first author while visiting Perseus Digital Library at Tufts University, under the grants A Reading Environment for Arabic and Islamic Culture, Department of Education (P017A060068-08) and The Dynamic Lexicon: Cyberinfrastructure and the Automatic Analysis of Historical Languages, National Endowment for the Humanities (PR-50013-08). The latter two authors were supported by Army prime contract #W911NF-07-1-0216 and University of Pennsylvania subaward #103-548106; by SRI International subcontract #27-001338 and ARFL prime contract #FA8750-09-C-0181; and by the Center for Intelligent Information Retrieval. Any opinions, findings, and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsors.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chang-etal-2013-constrained
https://aclanthology.org/D13-1057
A Constrained Latent Variable Model for Coreference Resolution
Coreference resolution is a well-known clustering task in Natural Language Processing. In this paper, we describe the Latent Left Linking model (L3M), a novel, principled, and linguistically motivated latent structured prediction approach to coreference resolution. We show that L3M admits efficient inference and can be augmented with knowledge-based constraints; we also present a fast stochastic gradient based learning method. Experiments on ACE and OntoNotes data show that L3M and its constrained version, CL3M, are more accurate than several state-of-the-art approaches as well as some structured prediction models proposed in the literature.
false
[]
[]
null
null
null
null
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
appelt-hobbs-1990-making
https://aclanthology.org/H90-1012
Making Abduction More Efficient
The TACITUS system uses a cost-based abduction scheme for finding and choosing among possible interpretations for natural language texts. Ordinary Prolog-style, backchaining deduction is augmented with the capability of making assumptions and of factoring two goal literals that are unifiable (see Hobbs et al., 1988). Deduction is combinatorially explosive, and since the abduction scheme augments deduction with two more options at each node, assumption and factoring, it is even more explosive. We have been engaged in an empirical investigation of the behavior of this abductive scheme on a knowledge base of nearly 400 axioms, performing relatively sophisticated linguistic processing. So far, we have begun to experiment, with good results, with three different techniques for controlling abduction: a type hierarchy, unwinding or avoiding transitivity axioms, and various heuristics for reducing the branch factor of the search.
false
[]
[]
null
null
null
null
1990
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
roxas-borra-2000-panel
https://aclanthology.org/P00-1074
Panel: Computational Linguistics Research on Philippine Languages
This paper describes computational linguistics activities on Philippine languages. The Philippines is an archipelago with a vast number of islands and numerous languages. The tasks of understanding, representing and implementing these languages require enormous work. An extensive amount of work has been done on understanding at least some of the major Philippine languages, but little has been done on the computational aspect. The majority of the latter has been for the purpose of machine translation.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
r-l-m-2020-nitk
https://aclanthology.org/2020.fnp-1.9
NITK NLP at FinCausal-2020 Task 1 Using BERT and Linear models.
FinCausal-2020 is a shared task focusing on causality detection in factual data for financial analysis. The financial data facts do not provide much explanation of the variability of these data. This paper proposes an efficient method to classify the data by whether or not it contains a financial cause. Many models were used to classify the data, of which an SVM model gave an F-score of 0.9435, while BERT with specific fine-tuning achieved the best results with an F-score of 0.9677.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
parida-etal-2020-odianlps
https://aclanthology.org/2020.wat-1.10
ODIANLP's Participation in WAT2020
This paper describes team ODIANLP's submission to WAT 2020. We participated in the English→Hindi Multimodal task and the Indic task. We used the state-of-the-art Transformer model for the translation task and InceptionResNetV2 for the Hindi Image Captioning task. Our submission tops its track in the English→Hindi Multimodal task and the Odia↔English translation tasks. Our submissions also performed well in the Indic Multilingual tasks.
false
[]
[]
null
null
null
At Idiap, the work was supported by the EU H2020 project "Real-time network, text, and speaker analytics for combating organized crime" (ROX-ANNE), grant agreement: 833635.At Charles University, the work was supported by the grants 19-26934X (NEUREM3) of the Czech Science Foundation and "Progress" Q18+Q48 of Charles University, and using language resources distributed by the LIN-DAT/CLARIN project of the Ministry of
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
filimonov-harper-2011-syntactic
https://aclanthology.org/D11-1064
Syntactic Decision Tree LMs: Random Selection or Intelligent Design?
Decision trees have been applied to a variety of NLP tasks, including language modeling, for their ability to handle a variety of attributes and sparse context space. Moreover, forests (collections of decision trees) have been shown to substantially outperform individual decision trees. In this work, we investigate methods for combining trees in a forest, as well as methods for diversifying trees for the task of syntactic language modeling. We show that our tree interpolation technique outperforms the standard method used in the literature, and that, on this particular task, restricting tree contexts in a principled way produces smaller and better forests, with the best achieving an 8% relative reduction in Word Error Rate over an n-gram baseline.
false
[]
[]
null
null
null
We would like to thank Ariya Rastrow for providing word lattices for the ASR rescoring experiments.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
burlot-2019-lingua
https://aclanthology.org/W19-5310
Lingua Custodia at WMT'19: Attempts to Control Terminology
This paper describes Lingua Custodia's submission to the WMT'19 news shared task for German-to-French on the topic of the EU elections. We report experiments on the adaptation of the terminology of a machine translation system to a specific topic, aimed at providing more accurate translations of specific entities like political parties and person names, given that the shared task provided no in-domain training parallel data dealing with the restricted topic. Our primary submission to the shared task uses backtranslation generated with a type of decoding allowing the insertion of constraints in the output in order to guarantee the correct translation of specific terms that are not necessarily observed in the data.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fang-cohn-2017-model
https://aclanthology.org/P17-2093
Model Transfer for Tagging Low-resource Languages using a Bilingual Dictionary
Cross-lingual model transfer is a compelling and popular method for predicting annotations in a low-resource language, whereby parallel corpora provide a bridge to a high-resource language and its associated annotated corpora. However, parallel data is not readily available for many languages, limiting the applicability of these approaches. We address these drawbacks in our framework which takes advantage of cross-lingual word embeddings trained solely on a high coverage bilingual dictionary. We propose a novel neural network model for joint training from both sources of data based on cross-lingual word embeddings, and show substantial empirical improvements over baseline techniques. We also propose several active learning heuristics, which result in improvements over competitive benchmark methods.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mcintyre-1998-babel-testbed
https://aclanthology.org/P98-2137
Babel: A Testbed for Research in Origins of Language
We believe that language is a complex adaptive system that emerges from adaptive interactions between language users and continues to evolve and adapt through repeated interactions. Our research looks at the mechanisms and processes involved in such emergence and adaptation. To provide a basis for our computer simulations, we have implemented an open-ended, extensible testbed called Babel which allows rapid construction of experiments and flexible visualization of results.
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gust-reddig-1982-logic
https://aclanthology.org/C82-2026
A LOGIC-ORIENTED ATN: Grammar Knowledge as Part of the System's Knowledge
The system BACON (Berlin Automatic COnstruction for semantic Networks) is an experimental intelligent question-answering system with a natural language interface based on single-sentence input. This system has been developed in the project "Automatische Erstellung semantischer Netze" (Automatic construction of semantic networks) at the Institute of Applied Computer Science at the Technical University of Berlin. The project was supported by the Ministry for Science and Technology (BMFT) of the Federal Republic of Germany. Explanations of the system's structure:
false
[]
[]
null
null
null
null
1982
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gianfortoni-etal-2011-modeling
https://aclanthology.org/W11-2606
Modeling of Stylistic Variation in Social Media with Stretchy Patterns
In this paper we describe a novel feature discovery technique that can be used to model stylistic variation in sociolects. While structural features offer much in terms of expressive power over simpler features used more frequently in machine learning approaches to modeling linguistic variation, they frequently come at an excessive cost in terms of feature space size expansion. We propose a novel form of structural features referred to as "stretchy patterns" that strike a balance between expressive power and compactness in order to enable modeling stylistic variation with reasonably small datasets. As an example we focus on the problem of modeling variation related to gender in personal blogs. Our evaluation demonstrates a significant improvement over standard baselines.
false
[]
[]
null
null
null
This research was funded by ONR grant N000141110221 and NSF DRL-0835426.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
spitkovsky-etal-2011-punctuation
https://aclanthology.org/W11-0303
Punctuation: Making a Point in Unsupervised Dependency Parsing
We show how punctuation can be used to improve unsupervised dependency parsing. Our linguistic analysis confirms the strong connection between English punctuation and phrase boundaries in the Penn Treebank. However, approaches that naively include punctuation marks in the grammar (as if they were words) do not perform well with Klein and Manning's Dependency Model with Valence (DMV). Instead, we split a sentence at punctuation and impose parsing restrictions over its fragments. Our grammar inducer is trained on the Wall Street Journal (WSJ) and achieves 59.5% accuracy out-of-domain (Brown sentences with 100 or fewer words), more than 6% higher than the previous best results. Further evaluation, using the 2006/7 CoNLL sets, reveals that punctuation aids grammar induction in 17 of 18 languages, for an overall average net gain of 1.3%. Some of this improvement is from training, but more than half is from parsing with induced constraints, in inference. Punctuation-aware decoding works with existing (even already-trained) parsing models and always increased accuracy in our experiments.
false
[]
[]
null
null
null
Partially funded by the Air Force Research Laboratory (AFRL), under prime contract no. FA8750-09-C-0181, and by NSF, via award #IIS-0811974. We thank Omri Abend, Slav Petrov and anonymous reviewers for many helpful suggestions, and we are especially grateful to Jenny R. Finkel for shaming us into using punctuation, to Christopher D. Manning for reminding us to explore "punctuation as words" baselines, and to Noah A. Smith for encouraging us to test against languages other than English.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rasanen-driesen-2009-comparison
https://aclanthology.org/W09-4640
A comparison and combination of segmental and fixed-frame signal representations in NMF-based word recognition
Segmental and fixed-frame signal representations were compared in different noise conditions in a weakly supervised word recognition task using a non-negative matrix factorization (NMF) framework. The experiments show that fixed-frame windowing results in better recognition rates with clean signals. When noise is introduced to the system, robustness of segmental signal representations becomes useful, decreasing the overall word error rate. It is shown that a combination of fixed-frame and segmental representations yields the best recognition rates in different noise conditions. An entropy based method for dynamically adjusting the weight between representations is also introduced, leading to near-optimal weighting and therefore enhanced recognition rates in varying SNR conditions.
false
[]
[]
null
null
null
This research is funded as part of the EU FP6 FET project Acquisition of Communication and Recognition Skills (ACORNS), contract no. FP6-034362.
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wang-etal-2019-stac
https://aclanthology.org/W19-2608
STAC: Science Toolkit Based on Chinese Idiom Knowledge Graph
Chinese idioms (Cheng Yu) have witnessed five thousand years of the history and culture of China, and they contain a large number of the scientific achievements of ancient China. However, existing Chinese online idiom dictionaries have limited functionality for scientific exploration. In this paper, we first construct a Chinese idiom knowledge graph by extracting domains and dynasties and associating them with idioms, and based on the idiom knowledge graph, we propose a Science Toolkit for Ancient China (STAC) aiming to support scientific exploration. In the STAC toolkit, the idiom navigator helps users explore overall scientific progress from an idiom perspective with visualization tools, while the idiom card and idiom QA shorten the action path and avoid interrupting the user's thinking while reading and writing. The current STAC toolkit is deployed at
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lopez-etal-2011-automatic
https://aclanthology.org/R11-1106
Automatic titling of Articles Using Position and Statistical Information
This paper describes a system facilitating information retrieval in a set of textual documents by tackling the automatic titling and subtitling issue. Automatic titling here consists in extracting relevant noun phrases from texts as candidate titles. An original approach combining statistical criteria and the positions of noun phrases in the text helps collect relevant titles and subtitles. Thus, the user may benefit from an outline of all the subjects evoked in a mass of documents, and easily find the information he/she is looking for. An evaluation on real data shows that the solutions given by this automatic titling approach are relevant.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
aksenova-etal-2016-morphotactics
https://aclanthology.org/W16-2019
Morphotactics as Tier-Based Strictly Local Dependencies
It is commonly accepted that morphological dependencies are finite-state in nature. We argue that the upper bound on morphological expressivity is much lower. Drawing on technical results from computational phonology, we show that a variety of morphotactic phenomena are tier-based strictly local and do not fall into weaker subclasses such as the strictly local or strictly piecewise languages. Since the tier-based strictly local languages are learnable in the limit from positive texts, this marks a first important step towards general machine learning algorithms for morphology. Furthermore, the limitation to tier-based strictly local languages explains typological gaps that are puzzling from a purely linguistic perspective.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chisholm-etal-2017-learning
https://aclanthology.org/E17-1060
Learning to generate one-sentence biographies from Wikidata
We investigate the generation of one-sentence Wikipedia biographies from facts derived from Wikidata slot-value pairs. We train a recurrent neural network sequence-to-sequence model with attention to select facts and generate textual summaries. Our model incorporates a novel secondary objective that helps ensure it generates sentences that contain the input facts. The model achieves a BLEU score of 41, improving significantly upon the vanilla sequence-to-sequence model and scoring roughly twice that of a simple template baseline. Human preference evaluation suggests the model is nearly as good as the Wikipedia reference. Manual analysis explores content selection, suggesting the model can trade the ability to infer knowledge against the risk of hallucinating incorrect information.
false
[]
[]
null
null
null
This work was supported by a Google Faculty Research Award (Chisholm) and an Australian Research Council Discovery Early Career Researcher Award (DE120102900, Hachey). Many thanks to reviewers for insightful comments and suggestions, and to Glen Pink, Kellie Webster, Art Harol and Bo Han for feedback at various stages.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chiang-etal-2021-improved
https://aclanthology.org/2021.rocling-1.38
Improved Text Classification of Long-term Care Materials
Aging populations have posed a challenge to many countries including Taiwan, and with them comes the issue of long-term care. Given the current context, the aim of this study was to explore the hotly-discussed subtopics in the field of long-term care, and identify their features through NLP. Texts from forums and websites were utilized for data collection and analysis. The study applied TF-IDF, the logistic regression model, and the naive Bayes classifier to process the data. In sum, the results showed that it reached an F1-score of 0.92 in identification, and a best accuracy of 0.71 in classification. The study found that, apart from TF-IDF features, certain words could be elicited as favorable features for classification. The results of this study could serve as a reference for future long-term care related applications.
true
[]
[]
Good Health and Well-Being
null
null
null
2021
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lukasik-etal-2016-hawkes
https://aclanthology.org/P16-2064
Hawkes Processes for Continuous Time Sequence Classification: an Application to Rumour Stance Classification in Twitter
Classification of temporal textual data sequences is a common task in various domains such as social media and the Web. In this paper we propose to use Hawkes Processes for classifying sequences of temporal textual data, which exploit both temporal and textual information. Our experiments on rumour stance classification on four Twitter datasets show the importance of using the temporal information of tweets along with the textual content.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
The work was supported by the European Union under grant agreement No. 611233 PHEME. Cohn was supported by an ARC Future Fellowship scheme (project number FT130101105).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
kwong-2009-graphemic
https://aclanthology.org/W09-3537
Graphemic Approximation of Phonological Context for English-Chinese Transliteration
Although direct orthographic mapping has been shown to outperform phoneme-based methods in English-to-Chinese (E2C) transliteration, it is observed that phonological context plays an important role in resolving graphemic ambiguity. In this paper, we investigate the use of surface graphemic features to approximate local phonological context for E2C. In the absence of an explicit phonemic representation of the English source names, experiments show that the previous and next character of a given English segment could effectively capture the local context affecting its expected pronunciation, and thus its rendition in Chinese.
false
[]
[]
null
null
null
The work described in this paper was substantially supported by a grant from City University of Hong Kong (Project No. 7002203).
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
park-rim-2005-maximum
https://aclanthology.org/W05-0632
Maximum Entropy Based Semantic Role Labeling
Semantic role labeling (SRL) refers to finding the semantic relation (e.g. Agent, Patient, etc.) between a predicate and syntactic constituents in sentences. In particular, with the argument information of the predicate, we can derive predicate-argument structures, which are useful for applications such as automatic information extraction. There have been many machine learning approaches to SRL in previous work (Gildea and Jurafsky, 2002; Pradhan et al., 2003; Lim et al., 2004).
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gartner-etal-2015-multi
https://aclanthology.org/P15-4005
Multi-modal Visualization and Search for Text and Prosody Annotations
We present ICARUS for intonation, an interactive tool to browse and search automatically derived descriptions of fundamental frequency contours. It offers access to tonal features in combination with other annotation layers like part-of-speech, syntax or coreference and visualizes them in a highly customizable graphical interface with various playback functions. The built-in search allows multilevel queries, the construction of which can be done graphically or textually, and includes the ability to search F0 contours based on various similarity measures.
false
[]
[]
null
null
null
This work was funded by the German Federal Ministry of Education and Research (BMBF) via CLARIN-D, No. 01UG1120F and the German Research Foundation (DFG) via the SFB 732, project INF.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dobrovoljc-nivre-2016-universal
https://aclanthology.org/L16-1248
The Universal Dependencies Treebank of Spoken Slovenian
This paper presents the construction of an open-source dependency treebank of spoken Slovenian, the first syntactically annotated collection of spontaneous speech in Slovenian. The treebank has been manually annotated using the Universal Dependencies annotation scheme, a one-layer syntactic annotation scheme with a high degree of cross-modality, cross-framework and cross-language interoperability. In this original application of the scheme to spoken language transcripts, we address a wide spectrum of syntactic particularities in speech, either by extending the scope of application of existing universal labels or by proposing new speech-specific extensions. The initial analysis of the resulting treebank and its comparison with the written Slovenian UD treebank confirms significant syntactic differences between the two language modalities, with spoken data consisting of shorter and more elliptic sentences, less and simpler nominal phrases, and more relations marking disfluencies, interaction, deixis and modality.
false
[]
[]
null
null
null
The work presented in this paper has been partially supported by the Young Researcher Programme of the Slovenian Research Agency and Parseme ICT COST Action IC1207 STSM grant.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jayanthi-pratapa-2021-study
https://aclanthology.org/2021.sigmorphon-1.6
A Study of Morphological Robustness of Neural Machine Translation
In this work, we analyze the robustness of neural machine translation systems towards grammatical perturbations in the source. In particular, we focus on morphological inflection related perturbations. While this has been recently studied for English→French translation (MORPHEUS) (Tan et al., 2020), it is unclear how this extends to Any→English translation systems. We propose MORPHEUS-MULTILINGUAL that utilizes UniMorph dictionaries to identify morphological perturbations to source that adversely affect the translation models. Along with an analysis of state-of-the-art pretrained MT systems, we train and analyze systems for 11 language pairs using the multilingual TED corpus (Qi et al., 2018). We also compare this to actual errors of non-native speakers using Grammatical Error Correction datasets. Finally, we present a qualitative and quantitative analysis of the robustness of Any→English translation systems. Code for our work is publicly available.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sleimi-gardent-2016-generating
https://aclanthology.org/W16-3511
Generating Paraphrases from DBPedia using Deep Learning
Recent deep learning approaches to Natural Language Generation mostly rely on sequence-to-sequence models. In these approaches, the input is treated as a sequence whereas in most cases, input to generation usually is either a tree or a graph. In this paper, we describe an experiment showing how enriching a sequential input with structural information improves results and help support the generation of paraphrases.
false
[]
[]
null
null
null
We thank the French National Research Agency for funding the research presented in this paper in the context of the WebNLG project ANR-14-CE24-0033.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liberman-2018-corpus
https://aclanthology.org/W18-3801
Corpus Phonetics: Past, Present, and Future
Semi-automatic analysis of digital speech collections is transforming the science of phonetics, and offers interesting opportunities to researchers in other fields. Convenient search and analysis of large published bodies of recordings, transcripts, metadata, and annotations-as much as three or four orders of magnitude larger than a few decades ago-has created a trend towards "corpus phonetics," whose benefits include greatly increased researcher productivity, better coverage of variation in speech patterns, and essential support for reproducibility. The results of this work include insight into theoretical questions at all levels of linguistic analysis, as well as applications in fields as diverse as psychology, sociology, medicine, and poetics, as well as within phonetics itself. Crucially, analytic inputs include annotation or categorization of speech recordings along many dimensions, from words and phrase structures to discourse structures, speaker attitudes, speaker demographics, and speech styles. Among the many near-term opportunities in this area we can single out the possibility of improving parsing algorithms by incorporating features from speech as well as text.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yuan-li-2007-breath
https://aclanthology.org/O07-3002
The Breath Segment in Expressive Speech
This paper, based on a selected one hour of expressive speech, is a pilot study on how to use breath segments to get more natural and expressive speech. It mainly deals with the status of when the breath segments occur and how the acoustic features are affected by the speaker's emotional states in terms of valence and activation. Statistical analysis is made to investigate the relationship between the length and intensity of the breath segments and the two state parameters. Finally, a perceptual experiment is conducted by employing the analysis results to synthesized speech, the results of which demonstrate that breath segment insertion can help improve the expressiveness and naturalness of the synthesized speech.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dsouza-etal-2021-semeval
https://aclanthology.org/2021.semeval-1.44
SemEval-2021 Task 11: NLPContributionGraph - Structuring Scholarly NLP Contributions for a Research Knowledge Graph
There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPCONTRIBUTIONGRAPH (a.k.a. 'the NCG task') tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article's contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While the absolute performance to generate triples remains low, in the conclusion of this article, the difficulty of producing such data and as a consequence of modeling it is highlighted.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
We thank the anonymous reviewers for their comments and suggestions. This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and by the TIB Leibniz Information Centre for Science and Technology.
2021
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
wilson-etal-2015-detection
https://aclanthology.org/D15-1307
Detection of Steganographic Techniques on Twitter
We propose a method to detect hidden data in English text. We target a system previously thought secure, which hides messages in tweets. The method brings ideas from image steganalysis into the linguistic domain, including the training of a feature-rich model for detection. To identify Twitter users guilty of steganography, we aggregate evidence; a first, in any domain. We test our system on a set of 1M steganographic tweets, and show it to be effective.
false
[]
[]
null
null
null
This paper aims to attack CoverTweet statistically. We are in the shoes of the warden, attempting to classify stego objects from innocent cover objects. We propose techniques new to linguistic steganalysis, including a large set of features that detect unusual and inconsistent use of language and the aggregation of evidence from multiple sentences. This last development, known in the steganographic literature as pooled steganalysis (Ker, 2007), represents a first in both linguistic and image steganalysis.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
linzen-jaeger-2014-investigating
https://aclanthology.org/W14-2002
Investigating the role of entropy in sentence processing
We outline four ways in which uncertainty might affect comprehension difficulty in human sentence processing. These four hypotheses motivate a self-paced reading experiment, in which we used verb subcategorization distributions to manipulate the uncertainty over the next step in the syntactic derivation (single step entropy) and the surprisal of the verb's complement. We additionally estimate word-by-word surprisal and total entropy over parses of the sentence using a probabilistic context-free grammar (PCFG). Surprisal and total entropy, but not single step entropy, were significant predictors of reading times in different parts of the sentence. This suggests that a complete model of sentence processing should incorporate both entropy and surprisal.
false
[]
[]
null
null
null
We thank Alec Marantz for discussion and Andrew Watts for technical assistance. This work was supported by an Alfred P. Sloan Fellowship to T. Florian Jaeger.
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lawson-etal-2010-annotating
https://aclanthology.org/W10-0712
Annotating Large Email Datasets for Named Entity Recognition with Mechanical Turk
Amazon's Mechanical Turk service has been successfully applied to many natural language processing tasks. However, the task of named entity recognition presents unique challenges. In a large annotation task involving over 20,000 emails, we demonstrate that a competitive bonus system and interannotator agreement can be used to improve the quality of named entity annotations from Mechanical Turk. We also build several statistical named entity recognition models trained with these annotations, which compare favorably to similar models trained on expert annotations.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
srikumar-roth-2013-modeling
https://aclanthology.org/Q13-1019
Modeling Semantic Relations Expressed by Prepositions
This paper introduces the problem of predicting semantic relations expressed by prepositions and develops statistical learning models for predicting the relations, their arguments and the semantic types of the arguments. We define an inventory of 32 relations, building on the word sense disambiguation task for prepositions and collapsing related senses across prepositions. Given a preposition in a sentence, our computational task is to jointly model the preposition relation and its arguments along with their semantic types, as a way to support the relation prediction. The annotated data, however, only provides labels for the relation label, and not the arguments and types. We address this by presenting two models for preposition relation labeling. Our generalization of latent structure SVM gives close to 90% accuracy on relation labeling. Further, by jointly predicting the relation, arguments, and their types along with preposition sense, we show that we can not only improve the relation accuracy, but also significantly improve sense prediction accuracy.
false
[]
[]
null
null
null
The authors wish to thank Martha Palmer, Nathan Schneider, the anonymous reviewers and the editor for their valuable feedback. The authors gratefully acknowledge the support of the
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lin-etal-2006-information
https://aclanthology.org/N06-1059
An Information-Theoretic Approach to Automatic Evaluation of Summaries
null
false
[]
[]
null
null
null
null
2006
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
may-knight-2007-syntactic
https://aclanthology.org/D07-1038
Syntactic Re-Alignment Models for Machine Translation
We present a method for improving word alignment for statistical syntax-based machine translation that employs a syntactically informed alignment model closer to the translation model than commonly-used word alignment models. This leads to extraction of more useful linguistic patterns and improved BLEU scores on translation experiments in Chinese and Arabic.
false
[]
[]
null
null
null
We thank David Chiang, Steve DeNeefe, Alex Fraser, Victoria Fossum, Jonathan Graehl, Liang Huang, Daniel Marcu, Michael Pust, Oana Postolache, Michael Pust, Jason Riesa, Jens Vöckler, and Wei Wang for help and discussion. This research was supported by NSF (grant IIS-0428020) and DARPA (contract HR0011-06-C-0022).
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chiang-etal-2013-parsing
https://aclanthology.org/P13-1091
Parsing Graphs with Hyperedge Replacement Grammars
Hyperedge replacement grammar (HRG) is a formalism for generating and transforming graphs that has potential applications in natural language understanding and generation. A recognition algorithm due to Lautemann is known to be polynomial-time for graphs that are connected and of bounded degree. We present a more precise characterization of the algorithm's complexity, an optimization analogous to binarization of context-free grammars, and some important implementation details, resulting in an algorithm that is practical for natural-language applications. The algorithm is part of Bolinas, a new software toolkit for HRG processing.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their helpful comments. This research was supported in part by ARO grant W911NF-10-1-0533.
2013
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
iordanskaja-etal-1992-generation
https://aclanthology.org/C92-3158
Generation of Extended Bilingual Statistical Reports
During the past few years we have been concerned with developing models for the automatic planning and realization of report texts within technical sublanguages of English and French. Since 1987 we have been implementing Meaning-Text language models (MTMs) [6, 7] for the task of realizing sentences from semantic specifications that are output by a text planner. A relatively complete MTM implementation for English was tested in the domain of operating system audit summaries in the Gossip project of 1987-89 [3]. At COLING-90 a report was given on the fully operational FoG system for generating marine forecasts in both English and French at weather centres in Eastern Canada [1]. The work reported on here concerns the experimental generation of extended bilingual summaries of Canadian statistical data. Our first focus has been on labour force surveys (LFS), where an extensive corpus of published reports in each language is available for empirical study. The current LFS system has built on the experience of the two preceding systems, but goes beyond either of them. In contrast to FoG, but similar to Gossip, LFS uses a semantic net representation of sentences as input to the realization process. Like Gossip, LFS also makes use of theme/rheme constraints to help optimize lexical and syntactic choices during sentence realization. But in contrast to Gossip, which produced only English texts, LFS is bilingual, making use of the conceptual level of representation produced by the planner as an interlingua from which to derive the linguistic semantic representations for texts in the two languages independently. Hence the LFS interlingua is much "deeper" than FoG's deep-syntactic interlingua. This allows us to introduce certain semantic differences between English and French sentences that we observe in natural "translation twin" texts.
false
[]
[]
null
null
null
null
1992
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bianchi-etal-1993-undestanding
https://aclanthology.org/E93-1058
Undestanding Stories in Different Languages with GETA-RUN
null
false
[]
[]
null
null
null
null
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
roukos-1993-automatic
https://aclanthology.org/H93-1092
Automatic Extraction of Grammars From Annotated Text
The primary objective of this project is to develop a robust, high-performance parser for English by automatically extracting a grammar from an annotated corpus of bracketed sentences, called the Treebank. The project is a collaboration between the IBM Continuous Speech Recognition Group and the University of Pennsylvania Department of Computer Sciences. Our initial focus is the domain of computer manuals with a vocabulary of 3000 words. We use a Treebank that was developed jointly by IBM and the University of Lancaster, England, during the past three years. We have an initial implementation of our parsing model where we used a simple set of features to guide us in our development of the approach. We used for training a Treebank of about 28,000 sentences. The parser's accuracy on a sample of 25 new sentences of length 7 to 17 words as judged, when compared to the Treebank, by three members of the group, is 52%. This is encouraging in light of the fact that we are in the process of increasing the features that the parser can look at. We give below a brief sketch of our approach.
false
[]
[]
null
null
null
null
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2019-modeling
https://aclanthology.org/P19-1619
Modeling Intra-Relation in Math Word Problems with Different Functional Multi-Head Attentions
Several deep learning models have been proposed for solving math word problems (MWPs) automatically. Although these models have the ability to capture features without manual efforts, their approaches to capturing features are not specifically designed for MWPs. To utilize the merits of deep learning models with simultaneous consideration of MWPs' specific features, we propose a group attention mechanism to extract global features, quantity-related features, quantitypair features and question-related features in MWPs respectively. The experimental results show that the proposed approach performs significantly better than previous state-of-the-art methods, and boost performance from 66.9% to 69.5% on Math23K with training-test split, from 65.8% to 66.9% on Math23K with 5-fold cross-validation and from 69.2% to 76.1% on MAWPS.
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
eyigoz-2010-tag
https://aclanthology.org/W10-4419
TAG Analysis of Turkish Long Distance Dependencies
All permutations of a two-level embedding sentence in Turkish are analyzed, in order to develop an LTAG grammar that can account for Turkish long distance dependencies. The fact that Turkish allows only long distance topicalization and extraposition is shown to be connected to a condition, the coherence condition, that draws the boundary between the acceptable and unacceptable permutations of the five-word sentence under investigation. The LTAG grammar for this fragment of Turkish has two levels: the first level assumes lexicalized and linguistically appropriate elementary trees, whereas the second level assumes elementary trees that are derived from the elementary trees of the first level, and are not lexicalized.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ma-etal-2018-challenging
https://aclanthology.org/N18-1185
Challenging Reading Comprehension on Daily Conversation: Passage Completion on Multiparty Dialog
This paper presents a new corpus and a robust deep learning architecture for a task in reading comprehension, passage completion, on multiparty dialog. Given a dialog in text and a passage containing factual descriptions about the dialog where mentions of the characters are replaced by blanks, the task is to fill the blanks with the most appropriate character names that reflect the contexts in the dialog. Since there is no dataset that challenges the task of passage completion in this genre, we create a corpus by selecting transcripts from a TV show that comprise 1,681 dialogs, generating passages for each dialog through crowdsourcing, and annotating mentions of characters in both the dialog and the passages. Given this dataset, we build a deep neural model that integrates rich feature extraction from convolutional neural networks into sequence modeling in recurrent neural networks, optimized by utterance and dialog level attentions. Our model outperforms the previous state-of-the-art model on this task in a different genre using bidirectional LSTM, showing a 13.0+% improvement for longer dialogs. Our analysis shows the effectiveness of the attention mechanisms and suggests a direction to machine comprehension on multiparty dialog.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
raux-eskenazi-2004-non
https://aclanthology.org/N04-1028
Non-Native Users in the Let's Go!! Spoken Dialogue System: Dealing with Linguistic Mismatch
This paper describes the CMU Let's Go!! bus information system, an experimental system designed to study the use of spoken dialogue interfaces by non-native speakers. The differences in performance of the speech recognition and language understanding modules of the system when confronted with native and non-native spontaneous speech are analyzed. Focus is placed on the linguistic mismatch between the user input and the system's expectations, and on its implications in terms of language modeling and parsing performance. The effect of including non-native data when building the speech recognition and language understanding modules is discussed. In order to close the gap between non-native and native input, a method is proposed to automatically generate confirmation prompts that are both close to the user's input and covered by the system's language model and grammar, in order to help the user acquire idiomatic expressions appropriate to the task.
false
[]
[]
null
null
null
The authors would like to thank Alan W Black, Dan Bohus and Brian Langner for their help with this research. This material is based upon work supported by the U.S. National Science Foundation under Grant No. 0208835, "LET'S GO: improved speech interfaces for the general public". Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-liu-2010-cips
https://aclanthology.org/W10-4126
The CIPS-SIGHAN CLP2010 Chinese Word Segmentation Backoff
The CIPS-SIGHAN CLP 2010 Chinese Word Segmentation Bakeoff was held in the summer of 2010 to evaluate the current state of the art in word segmentation. It focused on the crossdomain performance of Chinese word segmentation algorithms. Eighteen groups submitted 128 results over two tracks (open training and closed training), four domains (literature, computer science, medicine and finance) and two subtasks (simplified Chinese and traditional Chinese). We found that compared with the previous Chinese word segmentation bakeoffs, the performance of cross-domain Chinese word segmentation is not much lower, and the out-of-vocabulary recall is improved.
false
[]
[]
null
null
null
This work is supported by the National Natural Science Foundation of China (Grant No. 90920004). We gratefully acknowledge the generous assistance of the organizations listed below who provided the data and the Chinese word segmentation standard for this bakeoff; without their support, it could not have taken place: City University of Hong Kong; Institute for Computational Linguistics, Beijing University, Beijing, China. We thank Le Sun for his organization of the First CIPS-SIGHAN Joint Conference on Chinese Language Processing of which this bakeoff is part. We thank Lili Zhao for her segmentation-inconsistency-checking program and other preparation works for this bakeoff, and thank Guanghui Luo and Siyang Cao for the online scoring system they set up and maintained. We thank Yajuan Lü for her helpful suggestions on this paper. Finally we thank all the participants for their interest and hard work in making this bakeoff a success.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
blanco-sarabi-2016-automatic
https://aclanthology.org/N16-1169
Automatic Generation and Scoring of Positive Interpretations from Negated Statements
This paper presents a methodology to extract positive interpretations from negated statements. First, we automatically generate plausible interpretations using well-known grammar rules and manipulating semantic roles. Second, we score plausible alternatives according to their likelihood. Manual annotations show that the positive interpretations are intuitive to humans, and experimental results show that the scoring task can be automated.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rosenberg-2010-classification
https://aclanthology.org/N10-1109
Classification of Prosodic Events using Quantized Contour Modeling
We present Quantized Contour Modeling (QCM), a Bayesian approach to the classification of acoustic contours. We evaluate the performance of this technique in the classification of prosodic events. We find that, on BURNC, this technique can successfully classify pitch accents with 63.99% accuracy (.4481 CER), and phrase ending tones with 72.91% accuracy.
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
brasoveanu-etal-2020-media
https://aclanthology.org/2020.conll-1.28
In Media Res: A Corpus for Evaluating Named Entity Linking with Creative Works
Annotation styles express guidelines that direct human annotators by explicitly stating the rules to follow when creating gold standard annotations of text corpora. These guidelines not only shape the gold standards they help create, but also influence the training and evaluation of Named Entity Linking (NEL) tools, since different annotation styles correspond to divergent views on the entities present in a document. Such divergence is particularly relevant for texts from the media domain containing references to creative works. This paper presents a corpus of 1000 annotated documents from sources such as Wikipedia, TVTropes and WikiNews that are organized in ten partitions. Each document contains multiple gold standard annotations representing various annotation styles. The corpus is used to evaluate a series of Named Entity Linking tools in order to understand the impact of the differences in annotation styles on the reported accuracy when processing highly ambiguous entities such as names of creative works. Relaxed annotation guidelines that include overlap styles, for instance, lead to better results across all tools.
false
[]
[]
null
null
null
This research has been partially funded through the following projects: the ReTV project (www.retvproject.eu) funded by the European Union's Horizon 2020 Research and Innovation Programme (No. 780656), and MedMon (www.fhgr.ch/medmon) funded by the Swiss Innovation Agency Innosuisse.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jimenez-gutierrez-etal-2020-document
https://aclanthology.org/2020.findings-emnlp.332
Document Classification for COVID-19 Literature
The global pandemic has made it more important than ever to quickly and accurately retrieve relevant scientific literature for effective consumption by researchers in a wide range of fields. We provide an analysis of several multi-label document classification models on the LitCovid dataset, a growing collection of 23,000 research papers regarding the novel 2019 coronavirus. We find that pre-trained language models fine-tuned on this dataset outperform all other baselines and that BioBERT surpasses the others by a small margin with micro-F1 and accuracy scores of around 86% and 75% respectively on the test set. We evaluate the data efficiency and generalizability of these models as essential features of any system prepared to deal with an urgent situation like the current health crisis. We perform a data ablation study to determine how important article titles are for achieving reasonable performance on this dataset. Finally, we explore 50 errors made by the best performing models on LitCovid documents and find that they often (1) correlate certain labels too closely together and (2) fail to focus on discriminative sections of the articles; both of which are important issues to address in future work. Both data and code are available on GitHub.
true
[]
[]
Good Health and Well-Being
null
null
This research was sponsored in part by the Ohio Supercomputer Center (Center, 1987) . The authors would also like to thank Lang Li and Tanya Berger-Wolf for helpful discussions.
2020
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhao-etal-2021-efficient
https://aclanthology.org/2021.emnlp-main.354
Efficient Dialogue Complementary Policy Learning via Deep Q-network Policy and Episodic Memory Policy
Deep reinforcement learning has shown great potential in training dialogue policies. However, its favorable performance comes at the cost of many rounds of interaction. Most of the existing dialogue policy methods rely on a single learning system, while the human brain has two specialized learning and memory systems, supporting to find good solutions without requiring copious examples. Inspired by the human brain, this paper proposes a novel complementary policy learning (CPL) framework, which exploits the complementary advantages of the episodic memory (EM) policy and the deep Q-network (DQN) policy to achieve fast and effective dialogue policy learning. In order to coordinate between the two policies, we proposed a confidence controller to control the complementary time according to their relative efficacy at different stages. Furthermore, memory connectivity and time pruning are proposed to guarantee the flexible and adaptive generalization of the EM policy in dialog tasks. Experimental results on three dialogue datasets show that our method significantly outperforms existing methods relying on a single learning system.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rios-kavuluru-2018-emr
https://aclanthology.org/N18-1189
EMR Coding with Semi-Parametric Multi-Head Matching Networks
Coding EMRs with diagnosis and procedure codes is an indispensable task for billing, secondary data analyses, and monitoring health trends. Both speed and accuracy of coding are critical. While coding errors could lead to more patient-side financial burden and misinterpretation of a patient's well-being, timely coding is also needed to avoid backlogs and additional costs for the healthcare facility. In this paper, we present a new neural network architecture that combines ideas from few-shot learning matching networks, multi-label loss functions, and convolutional neural networks for text classification to significantly outperform other state-of-the-art models. Our evaluations are conducted using a well known deidentified EMR dataset (MIMIC) with a variety of multi-label performance measures.
false
[]
[]
null
null
null
Thanks to anonymous reviewers for their thorough reviews and constructive criticism that helped improve the clarity of the paper (especially leading to the addition of Section 3.5 in the revision). This research is supported by the U.S. National Library of Medicine through grant R21LM012274. We also gratefully acknowledge the support of the NVIDIA Corporation for its donation of the Titan X Pascal GPU used for this research.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dsouza-ng-2014-ensemble
https://aclanthology.org/C14-1159
Ensemble-Based Medical Relation Classification
Despite the successes of distant supervision approaches to relation extraction in the news domain, the lack of a comprehensive ontology of medical relations makes it difficult to apply such approaches to relation classification in the medical domain. In light of this difficulty, we propose an ensemble approach to this task where we exploit human-supplied knowledge to guide the design of members of the ensemble. Results on the 2010 i2b2/VA Challenge corpus show that our ensemble approach yields a 19.8% relative error reduction over a state-of-the-art baseline.
true
[]
[]
Good Health and Well-Being
null
null
We thank the three anonymous reviewers for their detailed and insightful comments on an earlier draft of this paper. This work was supported in part by NSF Grants IIS-1147644 and IIS-1219142.
2014
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bateman-1999-using
https://aclanthology.org/P99-1017
Using aggregation for selecting content when generating referring expressions
Previous algorithms for the generation of referring expressions have been developed specifically for this purpose. Here we introduce an alternative approach based on a fully generic aggregation method also motivated for other generation tasks. We argue that the alternative contributes to a more integrated and uniform approach to content determination in the context of complete noun phrase generation.
false
[]
[]
null
null
null
This paper was improved by the anonymous comments of reviewers for both the ACL and the European Natural Language Generation Workshop (1999). Remaining errors and obscurities are my own.
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hautli-butt-2011-towards
https://aclanthology.org/W11-3412
Towards a Computational Semantic Analyzer for Urdu
This paper describes a first approach to a computational semantic analyzer for Urdu on the basis of the deep syntactic analysis done by the Urdu grammar ParGram. Apart from the semantic construction, external lexical resources such as an Urdu WordNet and a preliminary VerbNet style resource for Urdu are developed and connected to the semantic analyzer. These resources allow for a deeper level of representation by providing real-word knowledge such as hypernyms of lexical entities and information on thematic roles. We therefore contribute to the overall goal of providing more insights into the computationally efficient analysis of Urdu, in particular to computational semantic analysis.
false
[]
[]
null
null
null
null
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
servan-etal-2012-liums
https://aclanthology.org/W12-3147
LIUM's SMT Machine Translation Systems for WMT 2012
This paper describes the development of French-English and English-French statistical machine translation systems for the 2012 WMT shared task evaluation. We developed phrase-based systems based on the Moses decoder, trained on the provided data only. Additionally, new features this year included improved language and translation model adaptation using the cross-entropy score for the corpus selection.
false
[]
[]
null
null
null
This work has been partially funded by the European Union under the EuroMatrixPlus project ICT-2007.2.2-FP7-231720 and the French government under the ANR project COSMAT ANR-09-CORD-004.
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fernandez-gonzalez-martins-2015-parsing
https://aclanthology.org/P15-1147
Parsing as Reduction
We reduce phrase-based parsing to dependency parsing. Our reduction is grounded on a new intermediate representation, "head-ordered dependency trees," shown to be isomorphic to constituent trees. By encoding order information in the dependency labels, we show that any off-the-shelf, trainable dependency parser can be used to produce constituents. When this parser is non-projective, we can perform discontinuous parsing in a very natural manner. Despite the simplicity of our approach, experiments show that the resulting parsers are on par with strong baselines, such as the Berkeley parser for English and the best non-reranking system in the SPMRL-2014 shared task. Results are particularly striking for discontinuous parsing of German, where we surpass the current state of the art by a wide margin. * This research was carried out during an internship at Priberam Labs.
false
[]
[]
null
null
null
We would like to thank the three reviewers for their insightful comments, and Slav Petrov, Djamé Seddah, Yannick Versley, David Hall, Muhua Zhu, Lingpeng Kong, Carlos Gómez-Rodríguez, and Andreas van Cranenburgh for valuable feedback and help in preparing data and running software code. This research has been partially funded by the Spanish Ministry of Economy and Competitiveness and FEDER (project TIN2010-18552-C03-01), Ministry of Education (FPU Grant Program) and Xunta de Galicia (projects R2014/029 and R2014/034). A. M. was supported by the EU/FEDER programme, QREN/POR Lisboa (Portugal), under the Intelligo project (contract 2012/24803), and by the FCT grants UID/EEA/50008/2013 and PTDC/EEI-SII/2312/2012.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
caucheteux-etal-2021-model-based
https://aclanthology.org/2021.findings-emnlp.308
Model-based analysis of brain activity reveals the hierarchy of language in 305 subjects
A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli (e.g. regular speech versus scrambled words, sentences, or paragraphs). Although successful, this 'model-free' approach necessitates the acquisition of a large and costly set of neuroimaging data. Here, we show that a model-based approach can reach equivalent results within subjects exposed to natural stimuli. We capitalize on the recently discovered similarities between deep language models and the human brain to compute the mapping between i) the brain responses to regular speech and ii) the activations of deep language models elicited by modified stimuli (e.g. scrambled words, sentences, or paragraphs). Our model-based approach successfully replicates the seminal study of (Lerner et al., 2011), which revealed the hierarchy of language areas by comparing the functional-magnetic resonance imaging (fMRI) of seven subjects listening to 7 min of both regular and scrambled narratives. We further extend and precise these results to the brain signals of 305 individuals listening to 4.1 hours of narrated stories. Overall, this study paves the way for efficient and flexible analyses of the brain bases of language.
false
[]
[]
null
null
null
This work was supported by the French ANR-20-CHIA-0016 and the European Research Council Starting Grant SLAB ERC-YStG-676943 to AG, and by the French ANR-17-EURE-0017 and the Fyssen Foundation to JRK for his work at PSL.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
rao-etal-2017-scalable
https://aclanthology.org/W17-7547
Scalable Bio-Molecular Event Extraction System towards Knowledge Acquisition
This paper presents a robust system for the automatic extraction of bio-molecular events from scientific texts. Event extraction provides information in the understanding of physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, such as knowledge base creation, knowledge discovery. Automatic event extraction is a challenging task due to ambiguity and diversity of natural language and linguistic phenomena, such as negations, anaphora and coreferencing leading to incorrect interpretation. In this work a machine learning based approach has been used for the event extraction. The methodology framework proposed in this work is derived from the perspective of natural language processing. The system includes a robust anaphora and coreference resolution module, developed as part of this work. An overall F-score of 54.25% is obtained, which is an improvement of 4% in comparison with the state of the art systems.
true
[]
[]
Good Health and Well-Being
null
null
null
2017
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yu-etal-2016-product
https://aclanthology.org/C16-1106
Product Review Summarization by Exploiting Phrase Properties
We propose a phrase-based approach for generating product review summaries. The main idea of our method is to leverage phrase properties to choose a subset of optimal phrases for generating the final summary. Specifically, we exploit two phrase properties, popularity and specificity. Popularity describes how popular the phrase is in the original reviews. Specificity describes how descriptive a phrase is in comparison to generic comments. We formalize the phrase selection procedure as an optimization problem and solve it using integer linear programming (ILP). An aspect-based bigram language model is used for generating the final summary with the selected phrases. Experiments show that our summarizer outperforms the other baselines.
false
[]
[]
null
null
null
This work was partly supported by the National Basic Research Program (973 Program
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
adorni-massone-1984-production
https://aclanthology.org/1984.bcs-1.16
Production of sentences: a general algorithm and a case study
In this paper a procedure for the production of sentences is described, producing written sentences in a particular language starting from formal representations of their meaning. After a brief description of the internal representation used, the algorithm is presented, and some results and future trends are discussed.
false
[]
[]
null
null
null
Authors wish to thank Domenico Parisi and Alessandra Giorgi for their helpful comments and discussion, leading to the implementation of the algorithm.
1984
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ni-etal-2019-justifying
https://aclanthology.org/D19-1018
Justifying Recommendations using Distantly-Labeled Reviews and Fine-Grained Aspects
Several recent works have considered the problem of generating reviews (or 'tips') as a form of explanation as to why a recommendation might match a user's interests. While promising, we demonstrate that existing approaches struggle (in terms of both quality and content) to generate justifications that are relevant to users' decision-making process. We seek to introduce new datasets and methods to address this recommendation justification task. In terms of data, we first propose an 'extractive' approach to identify review segments which justify users' intentions; this approach is then used to distantly label massive review corpora and construct large-scale personalized recommendation justification datasets. In terms of generation, we design two personalized generation models with this data: (1) a reference-based Seq2Seq model with aspect-planning which can generate justifications covering different aspects, and (2) an aspect-conditional masked language model which can generate diverse justifications based on templates extracted from justification histories. We conduct experiments on two real-world datasets which show that our model is capable of generating convincing and diverse justifications.
false
[]
[]
null
null
null
This work is partly supported by NSF #1750063. We thank all the reviewers for their constructive suggestions.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chen-etal-2021-retrack
https://aclanthology.org/2021.acl-demo.39
ReTraCk: A Flexible and Efficient Framework for Knowledge Base Question Answering
We present Retriever-Transducer-Checker (ReTraCk), a neural semantic parsing framework for large scale knowledge base question answering (KBQA). ReTraCk is designed as a modular framework to maintain high flexibility. It includes a retriever to retrieve relevant KB items efficiently, a transducer to generate logical form with syntax correctness guarantees and a checker to improve the transduction procedure. ReTraCk is ranked at top-1 overall performance on the GrailQA leaderboard and obtains highly competitive performance on the typical WebQuestionsSP benchmark. Our system can interact with users timely, demonstrating the efficiency of the proposed framework. * The first three authors contributed equally. This work was conducted during Shuang and Qian's internship at Microsoft Research Asia.
false
[]
[]
null
null
null
We would like to thank Audrey Lin and Börje F. Karlsson for their constructive comments and useful suggestions, and all the anonymous reviewers for their helpful feedback. We also thank Yu Gu for evaluating our submissions on the test set of the GrailQA benchmark and sharing preprocessed data on GrailQA and WebQuestionsSP.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
thieberger-2007-language
https://aclanthology.org/U07-1002
Does Language Technology Offer Anything to Small Languages?
The effort currently going into recording the smaller and perhaps more endangered languages of the world may result in computationally tractable documents in those languages, but to date there has not been a tradition of corpus creation for these languages. In this talk I will outline the language situation of Australia's neighbouring region and discuss methods currently used in language documentation, observing that it is quite difficult to get linguists to create reusable records of the languages they record, let alone expecting them to create marked-up corpora. I will highlight the importance of creating shared infrastructure to support our work, including the development of Pacific and Regional Archive for Digital Sources in Endangered Cultures (PARADISEC), a facility for curation of linguistic data.
true
[]
[]
Reduced Inequalities
null
null
null
2007
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
kishimoto-etal-2014-post
https://aclanthology.org/2014.amta-wptp.15
Post-editing user interface using visualization of a sentence structure
Translation has become increasingly important by virtue of globalization. To reduce the cost of translation, it is necessary to use machine translation and further to take advantage of post-editing based on the result of a machine translation for accurate information dissemination. Such post-editing (e.g., PET [Aziz et al., 2012] ) can be used practically for translation between European languages, which has a high performance in statistical machine translation. However, due to the low accuracy of machine translation between languages with different word order, such as Japanese-English and Japanese-Chinese, post-editing has not been used actively.
false
[]
[]
null
null
null
null
2014
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
el-haj-etal-2018-profiling
https://aclanthology.org/L18-1726
Profiling Medical Journal Articles Using a Gene Ontology Semantic Tagger
In many areas of academic publishing, there is an explosion of literature, and subdivision of fields into subfields, leading to stove-piping where sub-communities of expertise become disconnected from each other. This is especially true in the genetics literature over the last 10 years where researchers are no longer able to maintain knowledge of previously related areas. This paper extends several approaches based on natural language processing and corpus linguistics which allow us to examine corpora derived from bodies of genetics literature and will help to make comparisons and improve retrieval methods using domain knowledge via an existing gene ontology. We derived two open access medical journal corpora from PubMed related to psychiatric genetics and immune disorder genetics. We created a novel Gene Ontology Semantic Tagger (GOST) and lexicon to annotate the corpora and are then able to compare subsets of literature to understand the relative distributions of genetic terminology, thereby enabling researchers to make improved connections between them.
true
[]
[]
Good Health and Well-Being
null
null
null
2018
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kolak-etal-2003-generative
https://aclanthology.org/N03-1018
A Generative Probabilistic OCR Model for NLP Applications
In this paper, we introduce a generative probabilistic optical character recognition (OCR) model that describes an end-to-end process in the noisy channel framework, progressing from generation of true text through its transformation into the noisy output of an OCR system. The model is designed for use in error correction, with a focus on post-processing the output of black-box OCR systems in order to make it more useful for NLP tasks. We present an implementation of the model based on finite-state models, demonstrate the model's ability to significantly reduce character and word error rate, and provide evaluation results involving automatic extraction of translation lexicons from printed text.
false
[]
[]
null
null
null
This research was supported in part by National Science Foundation grant EIA0130422, Department of Defense contract RD-02-5700, DARPA/ITO Cooperative Agreement N660010028910, and Mitre agreement 010418-7712. We are grateful to Mohri et al. for the AT&T FSM Toolkit, Clarkson and Rosenfeld for CMU-Cambridge Toolkit, and David Doermann for providing the OCR output and useful discussion.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lohk-etal-2016-experiences
https://aclanthology.org/2016.gwc-1.28
Experiences of Lexicographers and Computer Scientists in Validating Estonian Wordnet with Test Patterns
New concepts and semantic relations are constantly added to Estonian Wordnet (EstWN) to increase its size. In addition to this, with the use of test patterns, the validation of EstWN hierarchies is also performed. This parallel work was carried out over the past four years (2011-2014) with 10 different EstWN versions (60-70). This has been a collaboration between the creators of test patterns and the lexicographers currently working on EstWN. This paper describes the usage of test patterns from the points of views of information scientists (the creators of test patterns) as well as the users (lexicographers). Using EstWN as an example, we illustrate how the continuous use of test patterns has led to significant improvement of the semantic hierarchies in EstWN.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gao-etal-2021-ream
https://aclanthology.org/2021.findings-acl.220
REAM♯: An Enhancement Approach to Reference-based Evaluation Metrics for Open-domain Dialog Generation
The lack of reliable automatic evaluation metrics is a major impediment to the development of open-domain dialogue systems. Various reference-based metrics have been proposed to calculate a score between a predicted response and a small set of references. However, these metrics show unsatisfactory correlations with human judgments. For a reference-based metric, its reliability mainly depends on two factors: its ability to measure the similarity between the predicted response and the reference response, as well as the reliability of the given reference set. Yet, there are few discussions on the latter. Our work attempts to fill this vacancy. We first clarify an assumption on reference-based metrics that, if more high-quality references are added into the reference set, the reliability of the metric will increase. Next, we present REAM♯: an enhancement approach to Reference-based EvAluation Metrics for open-domain dialogue systems. A prediction model is designed to estimate the reliability of the given reference set. We show how its predicted results can be helpful to augment the reference set, and thus improve the reliability of the metric. Experiments validate both the effectiveness of our prediction model and that the reliability of reference-based metrics improves with the augmented reference sets.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mari-2002-specification
https://aclanthology.org/W02-0803
Under-specification and contextual variability of abstract
In this paper we discuss some philosophical questions related to the treatment of abstract and underspecified prepositions. We consider three issues in particular: (i) the relation between sense and meanings, (ii) the privileged status of abstract meanings in the spectrum of contextual instantiations of basic senses, and finally (iii) the difference between prediction and inference. The discussion will be based on the study of avec (with) and the analysis of its abstract meaning of comitativity in particular. A model for avec semantic variability will also be suggested.
false
[]
[]
null
null
null
Many thanks to Patrick Saint-Dizier and Jacques Jayez for their careful readings and useful suggestions.
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kannan-santhi-ponnusamy-2020-tukapo
https://aclanthology.org/2020.semeval-1.95
TüKaPo at SemEval-2020 Task 6: Def(n)tly Not BERT: Definition Extraction Using pre-BERT Methods in a post-BERT World
We describe our system (TüKaPo) submitted for Task 6: DeftEval, at SemEval 2020. We developed a hybrid approach that combined existing CNN and RNN methods and investigated the impact of purely-syntactic and semantic features on the task of definition extraction, i.e, sentence classification. Our final model achieved a F1-score of 0.6851 in the first subtask.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
lindahl-2020-annotating
https://aclanthology.org/2020.argmining-1.11
Annotating argumentation in Swedish social media
This paper presents a small study of annotating argumentation in Swedish social media. Annotators were asked to annotate spans of argumentation in 9 threads from two discussion forums. At the post level, Cohen's κ and Krippendorff's α 0.48 was achieved. When manually inspecting the annotations the annotators seemed to agree when conditions in the guidelines were explicitly met, but implicit argumentation and opinions, resulting in annotators having to interpret what's missing in the text, caused disagreements.
false
[]
[]
null
null
null
The work presented here has been partly supported by an infrastructure grant to Språkbanken Text, University of Gothenburg, for contributing to building and operating a national e-infrastructure funded jointly by the participating institutions and the Swedish Research Council (under contract no. 2017-00626). We would also like to thank the anonymous reviewers for their constructive comments and feedback.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yan-pedersen-2017-duluth
https://aclanthology.org/S17-2064
Duluth at SemEval-2017 Task 6: Language Models in Humor Detection
This paper describes the Duluth system that participated in SemEval-2017 Task 6 #HashtagWars: Learning a Sense of Humor. The system participated in Subtasks A and B using N-gram language models, ranking highly in the task evaluation. This paper discusses the results of our system in the development and evaluation stages and from two post-evaluation runs.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-2021-codewithzichao
https://aclanthology.org/2021.dravidianlangtech-1.21
Codewithzichao@DravidianLangTech-EACL2021: Exploring Multilingual Transformers for Offensive Language Identification on Code Mixing Text
This paper describes our solution submitted to shared task on Offensive Language Identification in Dravidian Languages. We participated in all three tasks of offensive language identification. In order to address the task, we explored multilingual models based on XLM-RoBERTa and multilingual BERT trained on mixed data of three code-mixed languages. Besides, we solved the class-imbalance problem existed in training data by class combination, class weights and focal loss. Our model achieved weighted average F1 scores of 0.75 (ranked 4th), 0.94 (ranked 4th) and 0.72 (ranked 3rd) in Tamil-English task, Malayalam-English task and Kannada-English task, respectively.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
shinnou-1998-revision
https://link.springer.com/chapter/10.1007/3-540-49478-2_36
Revision of morphological analysis errors through the person name construction model
null
false
[]
[]
null
null
null
null
1998
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
finegan-dollak-verma-2020-layout
https://aclanthology.org/2020.insights-1.9
Layout-Aware Text Representations Harm Clustering Documents by Type
Clustering documents by type-grouping invoices with invoices and articles with articles-is a desirable first step for organizing large collections of document scans. Humans approaching this task use both the semantics of the text and the document layout to assist in grouping like documents. LayoutLM (Xu et al., 2019), a layout-aware transformer built on top of BERT with state-of-the-art performance on document-type classification, could reasonably be expected to outperform regular BERT (Devlin et al., 2018) for document-type clustering. However, we find experimentally that BERT significantly outperforms LayoutLM on this task (p < 0.001). We analyze clusters to show where layout awareness is an asset and where it is a liability.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their helpful comments, as well as Anik Saha for many discussions on LayoutLM's strengths and weaknesses for supervised tasks.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
khusainova-etal-2021-hierarchical
https://aclanthology.org/2021.vardial-1.2
Hierarchical Transformer for Multilingual Machine Translation
The choice of parameter sharing strategy in multilingual machine translation models determines how optimally parameter space is used and hence, directly influences ultimate translation quality. Inspired by linguistic trees that show the degree of relatedness between different languages, the new general approach to parameter sharing in multilingual machine translation was suggested recently. The main idea is to use these expert language hierarchies as a basis for multilingual architecture: the closer two languages are, the more parameters they share. In this work, we test this idea using the Transformer architecture and show that despite the success in previous work there are problems inherent to training such hierarchical models. We demonstrate that in case of carefully chosen training strategy the hierarchical architecture can outperform bilingual models and multilingual models with full parameter sharing.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ma-collins-2018-noise
https://aclanthology.org/D18-1405
Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency
Noise Contrastive Estimation (NCE) is a powerful parameter estimation method for log-linear models, which avoids calculation of the partition function or its derivatives at each training step, a computationally demanding step in many cases. It is closely related to negative sampling methods, now widely used in NLP. This paper considers NCE-based estimation of conditional models. Conditional models are frequently encountered in practice; however there has not been a rigorous theoretical analysis of NCE in this setting, and we will argue there are subtle but important questions when generalizing NCE to the conditional case. In particular, we analyze two variants of NCE for conditional models: one based on a classification objective, the other based on a ranking objective. We show that the ranking-based variant of NCE gives consistent parameter estimates under weaker assumptions than the classification-based method; we analyze the statistical efficiency of the ranking-based and classification-based variants of NCE; finally we describe experiments on synthetic data and language modeling showing the effectiveness and trade-offs of both methods.
false
[]
[]
null
null
null
The authors thank Emily Pitler and Ali Elkahky for many useful conversations about the work, and David Weiss for comments on an earlier draft of the paper.
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
king-1980-human
https://aclanthology.org/C80-1042
Human Factors and Linguistic Considerations: Keys to High-Speed Chinese Character Input
With a keyboard and supporting system developed at Cornell University, input methods used to identify ideographs are adaptations of well-known schemes; innovation is in the addition of automatic machine selection of ambiguously identified characters. The unique feature of the Cornell design is that a certain amount of intelligence has been built into the machine. This allows an operator to take advantage of the fact that about 60% of Chinese characters in text are paired with other characters to form two-syllable compounds or phrase words. In speech and writing these pairings eliminate about 95% of the ambiguities created by ambiguously identified syllables.
false
[]
[]
null
null
null
The work on which this paper was based received support from the NCR Corporation.
1980
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
ma-way-2010-hmm
https://aclanthology.org/W10-3813
HMM Word-to-Phrase Alignment with Dependency Constraints
In this paper, we extend the HMM word-to-phrase alignment model with syntactic dependency constraints. The syntactic dependencies between multiple words in one language are introduced into the model in a bid to produce coherent alignments. Our experimental results on a variety of Chinese-English data show that our syntactically constrained model can lead to as much as a 3.24% relative improvement in BLEU score over current HMM word-to-phrase alignment models on a Phrase-Based Statistical Machine Translation system when the training data is small, and a comparable performance compared to IBM model 4 on a Hiero-style system with larger training data. An intrinsic alignment quality evaluation shows that our alignment model with dependency constraints leads to improvements in both precision (by 1.74% relative) and recall (by 1.75% relative) over the model without dependency information.
false
[]
[]
null
null
null
This research is supported by the Science Foundation Ireland (Grant 07/CE/I1142) as part of the Centre for Next Generation Localisation (www.cngl.ie) at Dublin City University. Part of the work was carried out at Cambridge University Engineering Department with Dr. William Byrne. The authors would also like to thank the anonymous reviewers for their insightful comments.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
zhang-etal-2019-long
https://aclanthology.org/N19-1306
Long-tail Relation Extraction via Knowledge Graph Embeddings and Graph Convolution Networks
We propose a distance supervised relation extraction approach for long-tailed, imbalanced data which is prevalent in real-world settings. Here, the challenge is to learn accurate "few-shot" models for classes existing at the tail of the class distribution, for which little data is available. Inspired by the rich semantic correlations between classes at the long tail and those at the head, we take advantage of the knowledge from data-rich classes at the head of the distribution to boost the performance of the data-poor classes at the tail. First, we propose to leverage implicit relational knowledge among class labels from knowledge graph embeddings and learn explicit relational knowledge using graph convolution networks. Second, we integrate that relational knowledge into relation extraction model by coarse-to-fine knowledge-aware attention mechanism. We demonstrate our results for a large-scale benchmark dataset which show that our approach significantly outperforms other baselines, especially for long-tail relations.
false
[]
[]
null
null
null
We want to express gratitude to the anonymous reviewers for their hard work and kind comments, which will further improve our work in the future.This work is funded by NSFC91846204/61473260, national key research program YS2018YFB140004, Alibaba CangJingGe (Knowledge Engine) Research Plan and Natural Science Foundation of Zhejiang Province of China (LQ19F030001).
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
braune-fraser-2010-improved
https://aclanthology.org/C10-2010
Improved Unsupervised Sentence Alignment for Symmetrical and Asymmetrical Parallel Corpora
We address the problem of unsupervised and language-pair independent alignment of symmetrical and asymmetrical parallel corpora. Asymmetrical parallel corpora contain a large proportion of 1-to-0/0-to-1 and 1-to-many/many-to-1 sentence correspondences. We have developed a novel approach which is fast and allows us to achieve high accuracy in terms of F 1 for the alignment of both asymmetrical and symmetrical parallel corpora. The source code of our aligner and the test sets are freely available.
false
[]
[]
null
null
null
The first author was partially supported by the Hasler Stiftung 19 . Support for both authors was provided by Deutsche Forschungsgemeinschaft grants Models of Morphosyntax for Statistical Machine Translation and SFB 732.
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
guibon-etal-2021-shot
https://aclanthology.org/2021.emnlp-main.549
Few-Shot Emotion Recognition in Conversation with Sequential Prototypical Networks
Several recent studies on dyadic human-human interactions have been done on conversations without specific business objectives. However, many companies might benefit from studies dedicated to more precise environments such as after sales services or customer satisfaction surveys. In this work, we place ourselves in the scope of a live chat customer service in which we want to detect emotions and their evolution in the conversation flow. This context leads to multiple challenges that range from exploiting restricted, small and mostly unlabeled datasets to finding and adapting methods for such context. We tackle these challenges by using Few-Shot Learning while making the hypothesis it can serve conversational emotion classification for different languages and sparse labels. We contribute by proposing a variation of Prototypical Networks for sequence labeling in conversation that we name ProtoSeq. We test this method on two datasets with different languages: daily conversations in English and customer service chat conversations in French. When applied to emotion classification in conversations, our method proved to be competitive even when compared to other ones. The code for ProtoSeq is available at https://github.com/gguibon/ProtoSeq.
false
[]
[]
null
null
null
This project has received funding from SNCF, the French National Research Agency's grant ANR-17-MAOI and the DSAIDIS chair at Télécom-Paris.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
sazzed-2020-cross
https://aclanthology.org/2020.wnut-1.8
Cross-lingual sentiment classification in low-resource Bengali language
Sentiment analysis research in low-resource languages such as Bengali is still unexplored due to the scarcity of annotated data and the lack of text processing tools. Therefore, in this work, we focus on generating resources and showing the applicability of the cross-lingual sentiment analysis approach in Bengali. For benchmarking, we created and annotated a comprehensive corpus of around 12000 Bengali reviews. To address the lack of standard text-processing tools in Bengali, we leverage resources from English utilizing machine translation. We determine the performance of supervised machine learning (ML) classifiers in machine-translated English corpus and compare it with the original Bengali corpus. Besides, we examine sentiment preservation in the machine-translated corpus utilizing Cohen's Kappa and Gwet's AC1. To circumvent the laborious data labeling process, we explore lexicon-based methods and study the applicability of utilizing cross-domain labeled data from the resource-rich language. We find that supervised ML classifiers show comparable performances in Bengali and machine-translated English corpus. By utilizing labeled data, they achieve 15%-20% higher F1 scores compared to both lexicon-based and transfer learning-based methods. Besides, we observe that machine translation does not alter the sentiment polarity of the review for most of the cases. Our experimental results demonstrate that the machine-translation-based cross-lingual approach can be an effective way for sentiment classification in Bengali.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
suzuki-kumano-2005-learning
https://aclanthology.org/2005.mtsummit-papers.5
Learning Translations from Monolingual Corpora
This paper proposes a method for a machine translation (MT) system to automatically select and learn translation words, which suit the user's tastes or document fields by using a monolingual corpus manually compiled by the user, in order to achieve high-quality translation. We have constructed a system based on this method and carried out experiments to prove the validity of the proposed method. This learning system has been implemented in Toshiba's "The Honyaku" series.
false
[]
[]
null
null
null
null
2005
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
cassell-etal-2001-non
https://aclanthology.org/P01-1016
Non-Verbal Cues for Discourse Structure
This paper addresses the issue of designing embodied conversational agents that exhibit appropriate posture shifts during dialogues with human users. Previous research has noted the importance of hand gestures, eye gaze and head nods in conversations between embodied agents and humans. We present an analysis of human monologues and dialogues that suggests that postural shifts can be predicted as a function of discourse state in monologues, and discourse and conversation state in dialogues. On the basis of these findings, we have implemented an embodied conversational agent that uses Collagen in such a way as to generate postural shifts.
false
[]
[]
null
null
null
This research was supported by MERL, France Telecom, AT&T, and the other generous sponsors of the MIT Media Lab. Thanks to the other members of the Gesture and Narrative Language Group, in particular Ian Gouldstone and Hannes Vilhjálmsson.
2001
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
klein-etal-2002-robust
https://aclanthology.org/C02-2017
Robust Interpretation of User Requests for Text Retrieval in a Multimodal Environment
We describe a parser for robust and flexible interpretation of user utterances in a multi-modal system for web search in newspaper databases. Users can speak or type, and they can navigate and follow links using mouse clicks. Spoken or written queries may combine search expressions with browser commands and search space restrictions. In interpreting input queries, the system has to be fault-tolerant to account for spontaneous speech phenomena as well as typing or speech recognition errors which often distort the meaning of the utterance and are difficult to detect and correct. Our parser integrates shallow parsing techniques with knowledge-based text retrieval to allow for robust processing and coordination of input modes. Parsing relies on a two-layered approach: typical meta-expressions like those concerning search, newspaper types and dates are identified and excluded from the search string to be sent to the search engine. The search terms which are left after preprocessing are then grouped according to co-occurrence statistics which have been derived from a newspaper corpus. These co-occurrence statistics concern typical noun phrases as they appear in newspaper texts.
false
[]
[]
null
null
null
This work was supported by the Austrian Science Fund (FWF) under project number P-13704. Financial support forÖFAI is provided by the Austrian Federal Ministry of Education, Science and Culture.
2002
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
nikoulina-etal-2012-hybrid
https://aclanthology.org/W12-5701
Hybrid Adaptation of Named Entity Recognition for Statistical Machine Translation
Appropriate Named Entity handling is important for Statistical Machine Translation. In this work we address the challenging issues of generalization and sparsity of NEs in the context of SMT. Our approach uses the source NE Recognition (NER) system to generalize the training data by replacing the recognized Named Entities with place-holders, thus allowing a Phrase-Based Statistical Machine Translation (PBMT) system to learn more general patterns. At translation time, the recognized Named Entities are handled through a specifically adapted translation model, which improves the quality of their translation. We add a post-processing step to a standard NER system in order to make it more suitable for integration with SMT and we also learn a prediction model for deciding between options for translating the Named Entities, based on their context and on their impact on the translation of the entire sentence. We show important improvements in terms of BLEU and TER scores already after integration of NER into SMT, but especially after applying the SMT-adapted post-processing step to the NER component.
false
[]
[]
null
null
null
This work was partially supported by the Organic.Lingua project (http://www.organiclingua.eu/), funded by the European Commission under the ICT Policy Support Programme (ICT PSP).
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
griesshaber-etal-2020-fine
https://aclanthology.org/2020.coling-main.100
Fine-tuning BERT for Low-Resource Natural Language Understanding via Active Learning
Recently, leveraging pre-trained Transformer-based language models in downstream, task-specific models has advanced state-of-the-art results in natural language understanding tasks. However, only a little research has explored the suitability of this approach in low-resource settings with less than 1,000 training data points. In this work, we explore fine-tuning methods of BERT, a pre-trained Transformer-based language model, by utilizing pool-based active learning to speed up training while keeping the cost of labeling new data constant. Our experimental results on the GLUE data set show an advantage in model performance by maximizing the approximate knowledge gain of the model when querying from the pool of unlabeled data. Finally, we demonstrate and analyze the benefits of freezing layers of the language model during fine-tuning to reduce the number of trainable parameters, making it more suitable for low-resource settings.
false
[]
[]
null
null
null
This research and development project is funded within the "Future of Work" Program by the German Federal Ministry of Education and Research (BMBF) and the European Social Fund in Germany. It is implemented by the Project Management Agency Karlsruhe (PTKA). The authors are responsible for the content of this publication.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
prazak-konopik-2019-ulsana
https://aclanthology.org/R19-1112
ULSAna: Universal Language Semantic Analyzer
We present a live cross-lingual system capable of producing shallow semantic annotations of natural language sentences for 51 languages at this time. The domain of the input sentences is in principle unconstrained. The system uses a single training data set (in English) for all the languages. The resulting semantic annotations are therefore consistent across different languages. We use CoNLL Semantic Role Labeling training data and Universal dependencies as the basis for the system. The system is publicly available and supports processing data in batches; therefore, it can be easily used by the community for research tasks.
false
[]
[]
null
null
null
This publication was supported by the project LO1506 of the Czech Ministry of Education, Youth and Sports and by Grant No. SGS-2019-018 Processing of heterogeneous data and its specialized applications. Computational resources were provided by the CESNET LM2015042 and
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
hsieh-etal-2017-monpa
https://aclanthology.org/I17-2014
MONPA: Multi-objective Named-entity and Part-of-speech Annotator for Chinese using Recurrent Neural Network
Part-of-speech (POS) tagging and named entity recognition (NER) are crucial steps in natural language processing. In addition, the difficulty of word segmentation places extra burden on those who deal with languages such as Chinese, and pipelined systems often suffer from error propagation. This work proposes an endto-end model using character-based recurrent neural network (RNN) to jointly accomplish segmentation, POS tagging and NER of a Chinese sentence. Experiments on previous word segmentation and NER competition datasets show that a single joint model using the proposed architecture is comparable to those trained specifically for each task, and outperforms freely-available softwares. Moreover, we provide a web-based interface for the public to easily access this resource.
false
[]
[]
null
null
null
We are grateful for the constructive comments from three anonymous reviewers. This work was supported by grant MOST106-3114-E-001-002 from the Ministry of Science and Technology, Taiwan.
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
liu-2003-word
https://aclanthology.org/N03-3007
Word Fragments Identification Using Acoustic-Prosodic Features in Conversational Speech
Word fragments pose serious problems for speech recognizers. Accurate identification of word fragments will not only improve recognition accuracy, but also be very helpful for disfluency detection algorithm because the occurrence of word fragments is a good indicator of speech disfluencies. Different from the previous effort of including word fragments in the acoustic model, in this paper, we investigate the problem of word fragment identification from another approach, i.e. building classifiers using acoustic-prosodic features. Our experiments show that, by combining a few voice quality measures and prosodic features extracted from the forced alignments with the human transcriptions, we obtain a precision rate of 74.3% and a recall rate of 70.1% on the downsampled data of spontaneous speech. The overall accuracy is 72.9%, which is significantly better than chance performance of 50%.
false
[]
[]
null
null
null
The author gratefully acknowledges Mary Harper for her comments on this work. Part of this work was conducted at Purdue University and continued at ICSI where the author is supported by DARPA under contract MDA972-02-C-0038. Thank Elizabeth Shriberg, Andreas Stolcke and Luciana Ferrer at SRI for their advice and help with the extraction of the prosodic features. They are supported by NSF IRI-9619921 and NASA Award NCC 2 1256. Any opinions expressed in this paper are those of the authors and do not necessarily reflect the view of DARPA, NSF, or NASA.
2003
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kuhlmann-oepen-2016-squibs
https://aclanthology.org/J16-4009
Squibs: Towards a Catalogue of Linguistic Graph Banks
Graphs exceeding the formal complexity of rooted trees are of growing relevance to much NLP research. Although formally well understood in graph theory, there is substantial variation in the types of linguistic graphs, as well as in the interpretation of various structural properties. To provide a common terminology and transparent statistics across different collections of graphs in NLP, we propose to establish a shared community resource with an open-source reference implementation for common statistics.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
madhyastha-etal-2016-mapping
https://aclanthology.org/W16-1612
Mapping Unseen Words to Task-Trained Embedding Spaces
We consider the supervised training setting in which we learn task-specific word embeddings. We assume that we start with initial embeddings learned from unlabelled data and update them to learn taskspecific embeddings for words in the supervised training data. However, for new words in the test set, we must use either their initial embeddings or a single unknown embedding, which often leads to errors. We address this by learning a neural network to map from initial embeddings to the task-specific embedding space, via a multi-loss objective function. The technique is general, but here we demonstrate its use for improved dependency parsing (especially for sentences with out-of-vocabulary words), as well as for downstream improvements on sentiment analysis.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their useful comments. This research was supported by a Google Faculty Research Award to Mohit Bansal, Karen Livescu and Kevin Gimpel.
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false