Dataset schema (one record per paper; 29 fields):

field            | dtype         | details
ID               | stringlengths | 11 to 54
url              | stringlengths | 33 to 64
title            | stringlengths | 11 to 184
abstract         | stringlengths | 17 to 3.87k
label_nlp4sg     | bool          | 2 classes
task             | list          |
method           | list          |
goal1            | stringclasses | 9 values
goal2            | stringclasses | 9 values
goal3            | stringclasses | 1 value
acknowledgments  | stringlengths | 28 to 1.28k
year             | stringlengths | 4 to 4
sdg1             | bool          | 1 class
sdg2             | bool          | 1 class
sdg3             | bool          | 2 classes
sdg4             | bool          | 2 classes
sdg5             | bool          | 2 classes
sdg6             | bool          | 1 class
sdg7             | bool          | 1 class
sdg8             | bool          | 2 classes
sdg9             | bool          | 2 classes
sdg10            | bool          | 2 classes
sdg11            | bool          | 2 classes
sdg12            | bool          | 1 class
sdg13            | bool          | 2 classes
sdg14            | bool          | 1 class
sdg15            | bool          | 1 class
sdg16            | bool          | 2 classes
sdg17            | bool          | 2 classes
effenberger-etal-2021-analysis-language
https://aclanthology.org/2021.findings-emnlp.239
Analysis of Language Change in Collaborative Instruction Following
We analyze language change over time in a collaborative, goal-oriented instructional task, where utility-maximizing participants form conventions and increase their expertise. Prior work studied such scenarios mostly in the context of reference games, and consistently found that language complexity is reduced along multiple dimensions, such as utterance length, as conventions are formed. In contrast, we find that, given the ability to increase instruction utility, instructors increase language complexity along these previously studied dimensions to better collaborate with increasingly skilled instruction followers. [Figure examples, instructions by decile: 1: "get the card in front"; 5: "Collect the green square card in front of you."; 10: "turn around on the trail, go straight and get 2 green circles, continue straight on the trail to the right side of the glacier and get 1 black triangle."]
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
This research was supported by NSF under grants No. 1750499, 1750499-REU, and DGE-1650441. It also received support from a Google Focused Award, the Break Through Tech summer internship program, and a Facebook PhD Fellowship. We thank Chris Potts and Robert Hawkins for early discussions that initiated this analysis; and Ge Gao and Forrest Davis for their comments.
year: 2021 | sdg1–17: all false
bird-2022-local
https://aclanthology.org/2022.acl-long.539
Local Languages, Third Spaces, and other High-Resource Scenarios
How can language technology address the diverse situations of the world's languages? In one view, languages exist on a resource continuum and the challenge is to scale existing solutions, bringing under-resourced languages into the high-resource world. In another view, presented here, the world's language ecology includes standardised languages, local languages, and contact languages. These are often subsumed under the label of 'under-resourced languages' even though they have distinct functions and prospects. I explore this position and propose some ecologically-aware language technology agendas.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2022 | sdg1–17: all false
mehdad-etal-2010-towards
https://aclanthology.org/N10-1045
Towards Cross-Lingual Textual Entailment
This paper investigates cross-lingual textual entailment as a semantic relation between two text portions in different languages, and proposes a prospective research direction. We argue that cross-lingual textual entailment (CLTE) can be a core technology for several cross-lingual NLP applications and tasks. Through preliminary experiments, we aim at proving the feasibility of the task, and providing a reliable baseline. We also introduce new applications for CLTE that will be explored in future work.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null
acknowledgments: This work has been partially supported by the EC-funded project CoSyne (FP7-ICT-4-24853).
year: 2010 | sdg1–17: all false
he-etal-2020-syntactic
https://aclanthology.org/2020.coling-main.246
Syntactic Graph Convolutional Network for Spoken Language Understanding
Slot filling and intent detection are two major tasks for spoken language understanding. In most existing work, these two tasks are built as joint models with multi-task learning with no consideration of prior linguistic knowledge. In this paper, we propose a novel joint model that applies a graph convolutional network over dependency trees to integrate the syntactic structure for learning slot filling and intent detection jointly. Experimental results show that our proposed model achieves state-of-the-art performance on two public benchmark datasets and outperforms existing work. Finally, we apply the BERT model to further improve the performance on both slot filling and intent detection.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
The work was done when the first author was an intern at Meituan Dialogue Group. We thank Xiaojie Wang, Jiangnan Xia and Hengtong Lu for the discussion. We thank all anonymous reviewers for their constructive feedback.
year: 2020 | sdg1–17: all false
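The joint model in the he-etal-2020-syntactic record above applies a graph convolutional network (GCN) over dependency trees. A minimal sketch of one such layer in PyTorch, assuming the standard GCN propagation rule; the layer size, undirected edges, and self-loops are illustrative choices, not the paper's exact design:

```python
import torch
import torch.nn as nn

class DepGCNLayer(nn.Module):
    """One graph-convolution layer over a dependency tree: computes
    relu(A_hat @ H @ W), the standard GCN propagation rule, where A_hat
    is the row-normalized adjacency of the parse (with self-loops)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, H: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # Row-normalize so each token averages over its syntactic neighbours.
        A_hat = A / A.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear(A_hat @ H))

# Toy usage: 5 tokens, hidden size 16, edges taken from a dependency parse.
H = torch.randn(5, 16)
A = torch.eye(5)  # self-loops
for head, dep in [(1, 0), (1, 2), (1, 4), (4, 3)]:
    A[head, dep] = A[dep, head] = 1.0  # undirected syntactic edges
print(DepGCNLayer(16, 16)(H, A).shape)  # torch.Size([5, 16])
```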
nigmatulina-etal-2020-asr
https://aclanthology.org/2020.vardial-1.2
ASR for Non-standardised Languages with Dialectal Variation: the case of Swiss German
Strong regional variation, together with the lack of standard orthography, makes Swiss German automatic speech recognition (ASR) particularly difficult in a multi-dialectal setting. This paper focuses on one of the many challenges, namely, the choice of the output text to represent non-standardised Swiss German. We investigate two potential options: a) dialectal writing: approximate phonemic transcriptions that provide close correspondence between grapheme labels and the acoustic signal but are highly inconsistent, and b) normalised writing: transcriptions resembling standard German that are relatively consistent but distant from the acoustic signal. To find out which writing facilitates Swiss German ASR, we build several systems using the Kaldi toolkit and a dataset covering 14 regional varieties. A formal comparison shows that the system trained on the normalised transcriptions achieves better results in word error rate (WER) (29.39%) but underperforms at the character level, suggesting dialectal transcriptions offer a viable solution for downstream applications where dialectal differences are important. To better assess word-level performance for dialectal transcriptions, we use a flexible WER measure (FlexWER). When evaluated with this metric, the system trained on dialectal transcriptions outperforms that trained on the normalised writing. Besides establishing a benchmark for Swiss German multi-dialectal ASR, our findings can be helpful in designing ASR systems for other languages without standard orthography.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2020 | sdg1–17: all false
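The WER comparison in the record above relies on the standard word-level edit distance; FlexWER itself is the paper's own measure and is not reproduced here. A minimal sketch of plain WER, assuming whitespace tokenization:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over word tokens,
    normalized by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Toy Swiss German example with one substituted word: WER = 1/5 = 0.2.
print(wer("si het dr zug verpasst", "si het de zug verpasst"))
```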
yin-etal-2016-neural-generative
https://aclanthology.org/W16-0106
Neural Generative Question Answering
This paper presents an end-to-end neural network model, named Neural Generative Question Answering (GENQA), that can generate answers to simple factoid questions, based on the facts in a knowledge-base. More specifically, the model is built on the encoder-decoder framework for sequence-to-sequence learning, while equipped with the ability to enquire the knowledge-base, and is trained on a corpus of question-answer pairs, with their associated triples in the knowledge-base. Empirical study shows the proposed model can effectively deal with the variations of questions and answers, and generate right and natural answers by referring to the facts in the knowledge-base. The experiment on question answering demonstrates that the proposed model can outperform an embedding-based QA model as well as a neural dialogue model trained on the same data.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2016 | sdg1–17: all false
stahlberg-etal-2016-edit
https://aclanthology.org/W16-2324
The Edit Distance Transducer in Action: The University of Cambridge English-German System at WMT16
This paper presents the University of Cambridge submission to WMT16. Motivated by the complementary nature of syntactical machine translation and neural machine translation (NMT), we exploit the synergies of Hiero and NMT in different combination schemes. Starting out with a simple neural lattice rescoring approach, we show that the Hiero lattices are often too narrow for NMT ensembles. Therefore, instead of a hard restriction of the NMT search space to the lattice, we propose to loosely couple NMT and Hiero by composition with a modified version of the edit distance transducer. The loose combination outperforms lattice rescoring, especially when using multiple NMT systems in an ensemble.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
This work was supported by the U.K. Engineering and Physical Sciences Research Council (EPSRC grant EP/L027623/1).
year: 2016 | sdg1–17: all false
applegate-1960-syntax
https://aclanthology.org/1960.earlymt-nsmt.33
Syntax of the German Noun Phrase
It is generally agreed that a successful mechanical translation routine must be based on an accurate grammatical description of both the source and target languages. Furthermore, the description should be presented in a form that can easily be adapted for computer programming.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 1960 | sdg1–17: all false
gautam-bhattacharyya-2014-layered
https://aclanthology.org/W14-3350
LAYERED: Metric for Machine Translation Evaluation
This paper describes the LAYERED metric which is used for the shared WMT'14 metrics task. Various metrics exist for MT evaluation: BLEU (Papineni, 2002), METEOR (Alon Lavie, 2007), TER (Snover, 2006), etc., but they are found inadequate in quite a few language settings, for example, in the case of free word order languages. In this paper, we propose an MT evaluation scheme that is based on the NLP layers: lexical, syntactic and semantic. We contend that higher layer metrics are after all needed. Results are presented on the corpora of ACL-WMT 2013 and 2014. We end with a metric which is composed of weighted metrics at individual layers, which correlates very well with human judgment.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2014 | sdg1–17: all false
arase-etal-2020-annotation
https://aclanthology.org/2020.lrec-1.836
Annotation of Adverse Drug Reactions in Patients' Weblogs
Adverse drug reactions are a severe problem that significantly degrade quality of life, or even threaten the life of patients. Patient-generated texts available on the web have been gaining attention as a promising source of information in this regard. While previous studies annotated such patient-generated content, they only reported on limited information, such as whether a text described an adverse drug reaction or not. Further, they only annotated short texts of a few sentences crawled from online forums and social networking services. The dataset we present in this paper is unique for the richness of annotated information, including detailed descriptions of drug reactions with full context. We crawled patients' weblog articles shared on an online patient-networking platform and annotated the effects of drugs therein reported. We identified spans describing drug reactions and assigned labels for related drug names, standard codes for the symptoms of the reactions, and types of effects. As a first dataset, we annotated 677 drug reactions with these detailed labels based on 169 weblog articles by Japanese lung cancer patients. Our annotation dataset is made publicly available for further research on the detection of adverse drug reactions and, more broadly, on patient-generated text processing.
label_nlp4sg: true | task: [] | method: [] | goal1: Good Health and Well-Being | goal2–3: null | acknowledgments:
We thank Kazuki Ashihara for his contribution to annotation as well as valuable discussions with us. This work was supported by JST AIP-PRISM Grant Number JP-MJCR18Y1, Japan.
year: 2020 | sdg1–17: sdg3 true; all others false
kopotev-etal-2013-automatic
https://aclanthology.org/W13-1011
Automatic Detection of Stable Grammatical Features in N-Grams
This paper presents an algorithm that allows the user to issue a query pattern, collects multi-word expressions (MWEs) that match the pattern, and then ranks them in a uniform fashion. This is achieved by quantifying the strength of all possible relations between the tokens and their features in the MWEs. The algorithm collects the frequency of morphological categories of the given pattern on a unified scale in order to choose the stable categories and their values. For every part of speech, and for all of its categories, we calculate a normalized Kullback-Leibler divergence between the category's distribution in the pattern and its distribution in the corpus overall. Categories with the largest divergence are considered to be the most significant. The particular values of the categories are sorted according to a frequency ratio. As a result, we obtain morphosyntactic profiles of a given pattern, which includes the most stable category of the pattern, and their values.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
We are very grateful to the Russian National Corpus developers, especially E. Rakhilina and O. Lyashevskaya, for providing us with the data.
year: 2013 | sdg1–17: all false
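The core scoring step in the kopotev-etal-2013 abstract is a KL divergence between a category's distribution in the matched pattern and its distribution in the corpus overall. A small sketch of that comparison; the add-one smoothing is an illustrative choice, not necessarily the paper's exact normalization:

```python
import math

def kl_divergence(p: dict, q: dict) -> float:
    """D(p || q) over a shared set of category values, computed from raw
    counts; add-one smoothing keeps q non-zero wherever p is."""
    values = set(p) | set(q)
    p_s = {v: p.get(v, 0) + 1 for v in values}
    q_s = {v: q.get(v, 0) + 1 for v in values}
    zp, zq = sum(p_s.values()), sum(q_s.values())
    return sum((p_s[v] / zp) * math.log((p_s[v] / zp) / (q_s[v] / zq))
               for v in values)

# Distribution of grammatical case inside a query pattern vs. the corpus:
# a large divergence marks the category as stable (significant) for the pattern.
pattern_counts = {"nom": 90, "gen": 5, "acc": 5}
corpus_counts = {"nom": 40, "gen": 30, "acc": 30}
print(kl_divergence(pattern_counts, corpus_counts))
```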
elsner-charniak-2008-coreference
https://aclanthology.org/P08-2011
Coreference-inspired Coherence Modeling
Research on coreference resolution and summarization has modeled the way entities are realized as concrete phrases in discourse. In particular there exist models of the noun phrase syntax used for discourse-new versus discourse-old referents, and models describing the likely distance between a pronoun and its antecedent. However, models of discourse coherence, as applied to information ordering tasks, have ignored these kinds of information. We apply a discourse-new classifier and pronoun coreference algorithm to the information ordering task, and show significant improvements in performance over the entity grid, a popular model of local coherence.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
Chen and Barzilay, reviewers, DARPA, et al.
year: 2008 | sdg1–17: all false
gahl-1998-automatic
https://aclanthology.org/C98-1068
Automatic extraction of subcorpora based on subcategorization frames from a part-of-speech tagged corpus
This paper presents a method for extracting subcorpora documenting different subcategorization frames for verbs, nouns, and adjectives in the 100 million word British National Corpus. The extraction tool consists of a set of batch files for use with the Corpus Query Processor (CQP), which is part of the IMS corpus workbench (cf. Christ 1994a,b). A macroprocessor has been developed that allows the user to specify in a simple input file which subcorpora are to be created for a given lemma. The resulting subcorpora can be used (1) to provide evidence for the subcategorization properties of a given lemma, and to facilitate the selection of corpus lines for lexicographic research, and (2) to determine the frequencies of different syntactic contexts of each lemma.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
This work grew out of an extremely enjoyable collaborative effort with Dr. Ulrich Heid of IMS Stuttgart and Dan Jurafsky of the University of Boulder, Colorado. I would like to thank Doug Roland and especially the untiring Collin Baker for their work on the macroprocessor. I would also like to thank the members of the FrameNet project for their comments and suggestions. I thank Judith Eckle-Kohler of IMS-Stuttgart, JB Lowe of ICSI-Berkeley and Dan Jurafsky for comments on an earlier draft of this paper.
year: 1998 | sdg1–17: all false
budin-etal-1999-integrating
https://aclanthology.org/1999.tc-1.15
Integrating Translation Technologies Using SALT
The acronym SALT stands for Standards-based Access to multilingual Lexicons and Terminologies. The objective of the SALT project is to develop and promote a range of tools that will be made available on the World Wide Web to various user groups, in particular translators, terminology managers, localizers, technical communicators, but also tools developers, database managers, and language engineers. The resulting toolkit will facilitate access and re-use of heterogeneous multilingual resources derived from both NLP lexicons and human-oriented terminology databases.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 1999 | sdg1–17: all false
chen-etal-2020-mpdd
https://aclanthology.org/2020.lrec-1.76
MPDD: A Multi-Party Dialogue Dataset for Analysis of Emotions and Interpersonal Relationships
A dialogue dataset is an indispensable resource for building a dialogue system. Additional information like emotions and interpersonal relationships labeled on conversations enables the system to capture the emotion flow of the participants in the dialogue. However, there is no publicly available Chinese dialogue dataset with emotion and relation labels. In this paper, we collect the conversations from TV series scripts, and annotate emotion and interpersonal relationship labels on each utterance. This dataset contains 25,548 utterances from 4,142 dialogues. We also set up some experiments to observe the effects of the responded utterance on the current utterance, and the correlation between emotion and relation types in emotion and relation classification tasks.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
This research was partially supported by the Ministry of Science and Technology, Taiwan, under grants MOST-106-2923-E-002-012-MY3, MOST-108-2634-F-002-008-, MOST-108-2218-E-009-051-, and MOST-109-2634-F-002-034 and by Academia Sinica, Taiwan, under grant AS-TP-107-M05.
year: 2020 | sdg1–17: all false
jelinek-lafferty-1991-computation
https://aclanthology.org/J91-3004
Computation of the Probability of Initial Substring Generation by Stochastic Context-Free Grammars
Speech recognition language models are based on probabilities $P(w_{k+1} = v \mid w_1 w_2 \ldots w_k)$ that the next word $w_{k+1}$ will be any particular word $v$ of the vocabulary, given that the word sequence $w_1, w_2, \ldots, w_k$ is hypothesized to have been uttered in the past. If probabilistic context-free grammars are to be used as the basis of the language model, it will be necessary to compute the probability that successive application of the grammar rewrite rules (beginning with the sentence start symbol $s$) produces a word string whose initial substring is an arbitrary sequence $w_1, w_2, \ldots, w_{k+1}$. In this paper we describe a new algorithm that achieves the required computation in at most a constant times $k^3$ steps.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 1991 | sdg1–17: all false
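The quantity the jelinek-lafferty-1991 abstract describes is the prefix probability of a stochastic context-free grammar; the next-word probabilities follow from ratios of prefix probabilities. The notation below is a reconstruction from the abstract, not copied from the paper:

```latex
% Prefix probability of w_1 ... w_{k+1} under an SCFG with start symbol S:
% the total probability of all complete derivations whose yield begins
% with that substring.
P_{\mathrm{prefix}}(w_1 \ldots w_{k+1})
  = \sum_{u \in \Sigma^{*}} P\bigl(S \Rightarrow^{*} w_1 \ldots w_{k+1}\, u\bigr),
\qquad
P(w_{k+1} = v \mid w_1 \ldots w_k)
  = \frac{P_{\mathrm{prefix}}(w_1 \ldots w_k\, v)}{P_{\mathrm{prefix}}(w_1 \ldots w_k)} .
```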
mihalcea-tarau-2004-textrank
https://aclanthology.org/W04-3252
TextRank: Bringing Order into Text
In this paper, we introduce TextRank, a graph-based ranking model for text processing, and show how this model can be successfully used in natural language applications. In particular, we propose two innovative unsupervised methods for keyword and sentence extraction, and show that the results obtained compare favorably with previously published results on established benchmarks.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2004 | sdg1–17: all false
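TextRank's keyword variant runs PageRank over a co-occurrence graph of candidate words. A compact sketch with networkx; the window size and top-k cutoff are conventional choices, not values prescribed by the paper:

```python
import networkx as nx

def textrank_keywords(words: list[str], window: int = 2, top_k: int = 5) -> list[str]:
    """Rank words by PageRank over a co-occurrence graph in which an edge
    links words appearing within `window` tokens of each other."""
    graph = nx.Graph()
    for i, w in enumerate(words):
        for other in words[i + 1 : i + window + 1]:
            if other != w:
                graph.add_edge(w, other)
    scores = nx.pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Toy usage on a fragment of the paper's running example text.
tokens = ("compatibility of systems of linear constraints over the set "
          "of natural numbers").split()
print(textrank_keywords(tokens))
```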
white-1993-delimitedness
https://aclanthology.org/E93-1048
Delimitedness and Trajectory-of-Motion Events
The first part of the paper develops a novel, sortally-based approach to the problem of aspectual composition. The account is argued to be superior on both empirical and computational grounds to previous semantic approaches relying on referential homogeneity tests. While the account is restricted to manner-of-motion verbs, it does cover their interaction with mass terms, amount phrases, locative PPs, and distance, frequency, and temporal modifiers. The second part of the paper describes an implemented system based on the theoretical treatment which determines whether a specified sequence of events is or is not possible under varying situationally supplied constraints, given certain restrictive and simplifying assumptions. Briefly, the system extracts a set of constraint equations from the derived logical forms and solves them according to a best-value metric. Three particular limitations of the system and possible ways of addressing them are discussed in the conclusion.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 1993 | sdg1–17: all false
perkoff-etal-2021-orthographic
https://aclanthology.org/2021.sigmorphon-1.10
Orthographic vs. Semantic Representations for Unsupervised Morphological Paradigm Clustering
This paper presents two different systems for unsupervised clustering of morphological paradigms, in the context of the SIGMORPHON 2021 Shared Task 2. The goal of this task is to correctly cluster words in a given language by their inflectional paradigm, without any previous knowledge of the language and without supervision from labeled data of any sort. The words in a single morphological paradigm are different inflectional variants of an underlying lemma, meaning that the words share a common core meaning. They also usually show a high degree of orthographical similarity. Following these intuitions, we investigate KMeans clustering using two different types of word representations: one focusing on orthographical similarity and the other focusing on semantic similarity. Additionally, we discuss the merits of randomly initialized centroids versus pre-defined centroids for clustering. Pre-defined centroids are identified based on either a standard longest common substring algorithm or a connected graph method built off of longest common substring. For all development languages, the character-based embeddings perform similarly to the baseline, and the semantic embeddings perform well below the baseline. Analysis of the systems' errors suggests that clustering based on orthographic representations is suitable for a wide range of morphological mechanisms, particularly as part of a larger system.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2021 | sdg1–17: all false
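A minimal version of the orthographic clustering route described above: embed each word as character n-gram counts and cluster with KMeans. The vectorizer settings and cluster count are placeholders, not the shared-task configuration:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

words = ["walk", "walks", "walked", "walking", "sing", "sings", "singing"]

# Orthographic representation: character 2-3 gram counts per word.
vectorizer = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3))
X = vectorizer.fit_transform(words)

# Two underlying lemmas in this toy example, so k=2.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for word, label in zip(words, labels):
    print(label, word)
```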
chandran-nair-etal-2021-enough
https://aclanthology.org/2021.dravidianlangtech-1.13
Is this Enough?-Evaluation of Malayalam Wordnet
The quality of a product is the degree to which a product meets the Customer's expectations, which must also be valid in the case of lexical-semantic resources. Conducting a periodic evaluation of resources is essential to ensure that they meet a native speaker's expectations and are free from errors. This paper defines the possible errors that a lexical-semantic resource can contain, how they may impact downstream applications and explains the steps applied to evaluate and quantify the quality of Malayalam WordNet. Malayalam is one of the classical languages of India. We propose an approach allowing to subset the part of the WordNet tied to the lowest quality scores. We aim to work on this subset in a crowdsourcing context to improve the quality of the resource.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
We are not taking these values as the final deciding factor. We will be using this low score synsets as a candidate set for our crowdsourcing application. This application will have different tasks like define gloss, provide Synset, validate the gloss, and so on.
year: 2021 | sdg1–17: all false
bamman-etal-2020-annotated
https://aclanthology.org/2020.lrec-1.6
An Annotated Dataset of Coreference in English Literature
We present in this work a new dataset of coreference annotations for works of literature in English, covering 29,103 mentions in 210,532 tokens from 100 works of fiction. This dataset differs from previous coreference datasets in containing documents whose average length (2,105.3 words) is four times longer than other benchmark datasets (463.7 for OntoNotes), and contains examples of difficult coreference problems common in literature. This dataset allows for an evaluation of cross-domain performance for the task of coreference resolution, and analysis into the characteristics of long-distance within-document coreference.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
The research reported in this article was supported by an Amazon Research Award and by resources provided by NVIDIA and Berkeley Research Computing.
year: 2020 | sdg1–17: all false
wu-dredze-2020-explicit
https://aclanthology.org/2020.emnlp-main.362
Do Explicit Alignments Robustly Improve Multilingual Encoders?
Multilingual BERT (Devlin et al., 2019, mBERT), XLM-RoBERTa (Conneau et al., 2019, XLMR) and other unsupervised multilingual encoders can effectively learn cross-lingual representations. Explicit alignment objectives based on bitexts like Europarl or MultiUN have been shown to further improve these representations. However, word-level alignments are often suboptimal and such bitexts are unavailable for many languages. In this paper, we propose a new contrastive alignment objective that can better utilize such signal, and examine whether these previous alignment methods can be adapted to noisier sources of aligned data: a randomly sampled 1 million pair subset of the OPUS collection. Additionally, rather than report results on a single dataset with a single model run, we report the mean and standard deviation of multiple runs with different seeds, on four datasets and tasks. Our more extensive analysis finds that, while our new objective outperforms previous work, overall these methods do not improve performance with a more robust evaluation framework. Furthermore, the gains from using a better underlying model eclipse any benefits from alignment training. These negative results dictate more care in evaluating these methods and suggest limitations in applying explicit alignment objectives.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
This research is supported in part by ODNI, IARPA, via the BETTER Program contract #2019-
year: 2020 | sdg1–17: all false
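The contrastive alignment objective in the wu-dredze-2020 abstract is not spelled out there; a generic InfoNCE-style cross-lingual alignment loss of the kind such work uses looks as follows. This is a reconstruction for illustration, not the paper's exact objective:

```latex
% For a batch of N aligned source/target pairs (s_i, t_i), pull each pair
% together and push apart the in-batch negatives, with temperature tau:
\mathcal{L} = -\frac{1}{N} \sum_{i=1}^{N}
  \log \frac{\exp\bigl(\operatorname{sim}(s_i, t_i)/\tau\bigr)}
            {\sum_{j=1}^{N} \exp\bigl(\operatorname{sim}(s_i, t_j)/\tau\bigr)},
\qquad
\operatorname{sim}(u, v) = \frac{u^{\top} v}{\lVert u \rVert\, \lVert v \rVert}.
```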
qian-etal-2021-lifelong
https://aclanthology.org/2021.naacl-main.183
Lifelong Learning of Hate Speech Classification on Social Media
Existing work on automated hate speech classification assumes that the dataset is fixed and the classes are pre-defined. However, the amount of data in social media increases every day, and hot topics change rapidly, requiring classifiers to be able to continuously adapt to new data without forgetting the previously learned knowledge. This ability, referred to as lifelong learning, is crucial for the real-world application of hate speech classifiers in social media. In this work, we propose lifelong learning of hate speech classification on social media. To alleviate catastrophic forgetting, we propose to use Variational Representation Learning (VRL) along with a memory module based on LB-SOINN (Load-Balancing Self-Organizing Incremental Neural Network). Experimentally, we show that combining variational representation learning and the LB-SOINN memory module achieves better performance than the commonly-used lifelong learning techniques.
label_nlp4sg: true | task: [] | method: [] | goal1: Peace, Justice and Strong Institutions | goal2–3: null | acknowledgments: null
year: 2021 | sdg1–17: sdg16 true; all others false
yuan-bryant-2021-document
https://aclanthology.org/2021.bea-1.8
Document-level grammatical error correction
Document-level context can provide valuable information in grammatical error correction (GEC), which is crucial for correcting certain errors and resolving inconsistencies. In this paper, we investigate context-aware approaches and propose document-level GEC systems. Additionally, we employ a three-step training strategy to benefit from both sentence-level and document-level data. Our system outperforms previous document-level and all other NMT-based single-model systems, achieving state of the art on a common test set.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null
acknowledgments: We would like to thank Cambridge Assessment for supporting this research, and the anonymous reviewers.
year: 2021 | sdg1–17: all false
lascarides-oberlander-1993-temporal
https://aclanthology.org/E93-1031
Temporal Connectives in a Discourse Context
We examine the role of temporal connectives in multi-sentence discourse. In certain contexts, sentences containing temporal connectives that are equivalent in temporal structure can fail to be equivalent in terms of discourse coherence. We account for this by offering a novel, formal mechanism for accommodating the presuppositions in temporal subordinate clauses. This mechanism encompasses both accommodation by discourse attachment and accommodation by temporal addition. As such, it offers a precise and systematic model of interactions between presupposed material, discourse context, and the reader's background knowledge. We show how the results of accommodation help to determine a discourse's coherence.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 1993 | sdg1–17: all false
benton-etal-2019-deep
https://aclanthology.org/W19-4301
Deep Generalized Canonical Correlation Analysis
We present Deep Generalized Canonical Correlation Analysis (DGCCA), a method for learning nonlinear transformations of arbitrarily many views of data, such that the resulting transformations are maximally informative of each other. While methods for nonlinear two-view representation learning (Deep CCA; Andrew et al., 2013) and linear many-view representation learning (Generalized CCA; Horst, 1961) exist, DGCCA combines the flexibility of nonlinear (deep) representation learning with the statistical power of incorporating information from many sources, or views. We present the DGCCA formulation as well as an efficient stochastic optimization algorithm for solving it. We learn and evaluate DGCCA representations for three downstream tasks: phonetic transcription from acoustic & articulatory measurements, recommending hashtags, and recommending friends on a dataset of Twitter users.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2019 | sdg1–17: all false
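The DGCCA formulation combines view-specific networks f_j with the classical GCCA objective of reconstructing a shared representation G. Written here as a sketch from the abstract's description, not verbatim from the paper:

```latex
% J views X_1, ..., X_J, each passed through a network f_j and projected
% by U_j onto a shared orthonormal representation G:
\min_{\,U_1,\ldots,U_J,\;G}\;
  \sum_{j=1}^{J} \bigl\lVert G - f_j(X_j)\,U_j \bigr\rVert_F^2
\quad \text{s.t.} \quad G^{\top} G = I .
```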
forbes-webber-2002-semantic
https://aclanthology.org/W02-0204
A Semantic Account of Adverbials as Discourse Connectives
null
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2002 | sdg1–17: all false
seker-etal-2018-universal
https://aclanthology.org/K18-2021
Universal Morpho-Syntactic Parsing and the Contribution of Lexica: Analyzing the ONLP Lab Submission to the CoNLL 2018 Shared Task
We present the contribution of the ONLP lab at the Open University of Israel to the CoNLL 2018 UD Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies. Our contribution is based on a transition-based parser called yap: yet another parser, which includes a standalone morphological model, a standalone dependency model, and a joint morphosyntactic model. In the task we used yap's standalone dependency parser to parse input morphologically disambiguated by UDPipe, and obtained the official score of 58.35 LAS. In a follow-up investigation we use yap to show how the incorporation of morphological and lexical resources may improve the performance of end-to-end raw-to-dependencies parsing in the case of a morphologically rich and low-resource language, Modern Hebrew. Our results on Hebrew underscore the importance of CoNLL-UL, a UD-compatible standard for accessing external lexical resources, for enhancing end-to-end UD parsing, in particular for morphologically rich and low-resource languages. We thus encourage the community to create, convert, or make available more such lexica.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
We thank the CoNLL Shared Task Organizing Committee for their hard work and their timely support. We also thank the TIRA platform team (Potthast et al., 2014) for providing a system that facilitates competition and reproducible research. The research towards this shared task submission has been supported by the European Research Council, ERC-StG-2015 Grant 677352, and by an Israel Science Foundation (ISF) Grant 1739/26, for which we are grateful.
year: 2018 | sdg1–17: all false
aberdeen-etal-2001-finding
https://aclanthology.org/H01-1028
Finding Errors Automatically in Semantically Tagged Dialogues
We describe a novel method for detecting errors in task-based human-computer (HC) dialogues by automatically deriving them from semantic tags. We examined 27 HC dialogues from the DARPA Communicator air travel domain, comparing user inputs to system responses to look for slot value discrepancies, both automatically and manually. For the automatic method, we labeled the dialogues with semantic tags corresponding to "slots" that would be filled in "frames" in the course of the travel task. We then applied an automatic algorithm to detect errors in the dialogues. The same dialogues were also manually tagged (by a different annotator) to label errors directly. An analysis of the results of the two tagging methods indicates that it may be possible to detect errors automatically in this way, but our method needs further work to reduce the number of false errors detected. Finally, we present a discussion of the differing results from the two tagging methods.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2001 | sdg1–17: all false
vazquez-etal-2020-systematic
https://aclanthology.org/2020.cl-2.5
A Systematic Study of Inner-Attention-Based Sentence Representations in Multilingual Neural Machine Translation
Neural machine translation has considerably improved the quality of automatic translations by learning good representations of input sentences. In this article, we explore a multilingual translation model capable of producing fixed-size sentence representations by incorporating an intermediate crosslingual shared layer, which we refer to as attention bridge. This layer exploits the semantics from each language and develops into a language-agnostic meaning representation that can be efficiently used for transfer learning. We systematically study the impact of the size of the attention bridge and the effect of including additional languages in the model. In contrast to related previous work, we demonstrate that there is no conflict between translation performance and the use of sentence representations in downstream tasks. In particular, we show that larger intermediate layers not only improve translation quality, especially for long sentences, but also push the accuracy of trainable classification tasks. Nevertheless, shorter representations lead to increased compression that is beneficial in non-trainable similarity tasks. Similarly, we show that trainable downstream tasks benefit from multilingual models, whereas additional language signals do not improve performance in non-trainable benchmarks.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
This work is part of the FoTran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no 771113). The authors gratefully acknowledge the support of the Academy of Finland through project 314062 from the ICT 2023 call on Computation, Machine Learning and Artificial Intelligence and projects 270354 and 273457. Finally, we would also like to acknowledge CSC -IT Center for Science, Finland, for computational resources, as well as NVIDIA and their GPU grant.
year: 2020 | sdg1–17: all false
nirenburg-etal-1986-knowledge
https://aclanthology.org/C86-1148
On Knowledge-Based Machine Translation
This paper describes the design of the knowledge representation medium used for representing concepts and assertions, respectively, in a subworld chosen for a knowledge-based machine translation system. This design is used in the TRANSLATOR machine translation project. The knowledge representation language, or interlingua, has two components, DIL and TIL. DIL stands for 'dictionary of interlingua' and describes the semantics of a subworld. TIL stands for 'text of interlingua' and is responsible for producing an interlingua text, which represents the meaning of an input text in the terms of the interlingua. We maintain that involved analysis of various types of linguistic and encyclopaedic meaning is necessary for the task of automatic translation. The mechanisms for extracting, manipulating and reproducing the meaning of texts will be reported in detail elsewhere. The linguistic (including the syntactic) knowledge about source and target languages is used by the mechanisms that translate texts into and from the interlingua. Since interlingua is an artificial language, we can (and do, through TIL) control the syntax and semantics of the allowed interlingua elements. The interlingua suggested for TRANSLATOR has a broader coverage than other knowledge representation schemata for natural language. It involves the knowledge about discourse, speech acts, focus, time, space and other facets of the overall meaning of texts.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null
acknowledgments: The authors wish to thank Irene Nirenburg for reading, discussing and criticizing the numerous successive versions of the manuscript. Needless to say, it's we who are to blame for the remaining errors.
year: 1986 | sdg1–17: all false
aarts-1995-acyclic
https://aclanthology.org/1995.iwpt-1.2
Acyclic Context-sensitive Grammars
A grammar formalism is introduced that generates parse trees with crossing branches. The uniform recognition problem is NP-complete, but for any fixed grammar the recognition problem is polynomial.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 1995 | sdg1–17: all false
zhang-etal-2015-binarized
https://aclanthology.org/D15-1250
A Binarized Neural Network Joint Model for Machine Translation
The neural network joint model (NNJM), which augments the neural network language model (NNLM) with an m-word source context window, has achieved large gains in machine translation accuracy, but also has problems with high normalization cost when using large vocabularies. Training the NNJM with noise-contrastive estimation (NCE), instead of standard maximum likelihood estimation (MLE), can reduce computation cost. In this paper, we propose an alternative to NCE, the binarized NNJM (BNNJM), which learns a binary classifier that takes both the context and target words as input, and can be efficiently trained using MLE. We compare the BNNJM and NNJM trained by NCE on various translation tasks.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2015 | sdg1–17: all false
mariani-etal-2016-study
https://aclanthology.org/W16-1509
A Study of Reuse and Plagiarism in Speech and Natural Language Processing papers
The aim of this experiment is to present an easy way to compare fragments of texts in order to detect (supposed) results of copy & paste operations between articles in the domain of Natural Language Processing, including Speech Processing (NLP). The search space of the comparisons is a corpus labelled as NLP4NLP, which includes 34 different sources and gathers a large part of the publications in the NLP field over the past 50 years. This study considers the similarity between the papers of each individual source and the complete set of papers in the whole corpus, according to four different types of relationship (self-reuse, self-plagiarism, reuse and plagiarism) and in both directions: a source paper borrowing a fragment of text from another paper of the collection, or in the reverse direction, fragments of text from the source paper being borrowed and inserted in another paper of the collection.
label_nlp4sg: true | task: [] | method: [] | goal1: Industry, Innovation and Infrastructure | goal2: Peace, Justice and Strong Institutions | goal3: null | acknowledgments: null
year: 2016 | sdg1–17: sdg9 and sdg16 true; all others false
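Detection of copy & paste between papers, as in the mariani-etal-2016 study above, is commonly approximated by word n-gram overlap between documents. A toy sketch; the n-gram length and the interpretation threshold are arbitrary choices here, not the study's parameters:

```python
def ngrams(text: str, n: int = 7) -> set[tuple[str, ...]]:
    """All word n-grams of a document, lowercased."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def reuse_score(paper_a: str, paper_b: str, n: int = 7) -> float:
    """Fraction of paper_a's word n-grams that also occur in paper_b;
    long shared n-grams are a strong signal of copied fragments."""
    a, b = ngrams(paper_a, n), ngrams(paper_b, n)
    return len(a & b) / max(len(a), 1)

source = "we propose a novel method for parsing with neural networks and beam search"
candidate = "we propose a novel method for parsing with neural networks in this work"
print(reuse_score(source, candidate))  # > 0 signals a shared fragment
```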
cunningham-etal-1997-gate
https://aclanthology.org/A97-2017
GATE - a General Architecture for Text Engineering
For a variety of reasons NLP has recently spawned a related engineering discipline called language engineering (LE), whose orientation is towards the application of NLP techniques to solving large-scale, real-world language processing problems in a robust and predictable way. Aside from the host of fundamental theoretical problems that remain to be answered in NLP, language engineering faces a variety of problems of its own. First, there is no theory of language which is universally accepted, and no computational model of even a part of the process of language understanding which stands uncontested. Second, building intelligent application systems, systems which model or reproduce enough human language processing capability to be useful, is a large-scale engineering effort which, given political and economic realities, must rely on the efforts of many small groups of researchers, spatially and temporally distributed. The first point means that any attempt to push researchers into a theoretical or representational straight-jacket is premature, unhealthy and doomed to failure. The second means that no research team alone is likely to have the resources to build from scratch an entire state-of-the-art LE application system. Given this state of affairs, what is the best practical support that can be given to advance the field? Clearly, the pressure to build on the efforts of others demands that LE tools or component technologies be readily available for experimentation and reuse. But the pressure towards theoretical diversity means that there is no point attempting to gain agreement, in the short term, on what set of component technologies should be developed or on the informational content or syntax of representations that these components should require or produce. Our response has been to design and implement a software environment called GATE (Cunningham et al., 1997), which we will demonstrate at ANLP. GATE attempts to meet the following objectives:
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 1997 | sdg1–17: all false
ahmed-nurnberger-2008-arabic
https://aclanthology.org/2008.eamt-1.3
Arabic/English word translation disambiguation using parallel corpora and matching schemes
The limited coverage of available Arabic language lexicons causes a serious challenge in Arabic cross language information retrieval. Translation in cross language information retrieval consists of assigning one of the semantic representation terms in the target language to the intended query. Despite the problem of the completeness of the dictionary, we also face the problem of which one of the translations proposed by the dictionary for each query term should be included in the query translations. In this paper, we describe the implementation and evaluation of an Arabic/English word translation disambiguation approach that is based on exploiting a large bilingual corpus and statistical co-occurrence to find the correct sense for the query translations terms. The correct word translations of the given query term are determined based on their cohesion with words in the training corpus and a special similarity score measure. The specific properties of the Arabic language that frequently hinder the correct match are taken into account.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2008 | sdg1–17: all false
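The cohesion-based choice among dictionary translations described in the ahmed-nurnberger-2008 abstract can be sketched as scoring each candidate by its corpus co-occurrence with the translations of the other query terms. The co-occurrence table and similarity scoring here are placeholders for illustration, not the paper's exact measure:

```python
def choose_translation(candidates: list[str], context: list[str],
                       cooc: dict[tuple[str, str], int]) -> str:
    """Pick the candidate translation that co-occurs most with the
    context words (translations of the other query terms)."""
    def cohesion(cand: str) -> int:
        return sum(cooc.get((cand, c), 0) + cooc.get((c, cand), 0)
                   for c in context)
    return max(candidates, key=cohesion)

# Toy co-occurrence counts, as if harvested from a parallel corpus (made up).
cooc = {("bank", "money"): 120, ("bank", "account"): 95,
        ("shore", "money"): 2, ("shore", "river"): 80}
print(choose_translation(["bank", "shore"], ["money", "account"], cooc))  # bank
```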
gorz-paulus-1988-finite
https://aclanthology.org/C88-1043
A Finite State Approach to German Verb Morphology
This paper presents a new, language independent model for analysis and generation of word forms based on Finite State Transducers (FSTs). It has been completely implemented on a PC and successfully tested with lexicons and rules covering all of German verb morphology and the most interesting subsets of French and Spanish verbs as well. The linguistic databases consist of a letter-tree structured lexicon with annotated feature lists and a FST which is constructed from a set of morphophonological rules. These rewriting rules operate on complete words unlike other FST-based systems.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 1988 | sdg1–17: all false
lamont-2018-decomposing
https://aclanthology.org/W18-0310
Decomposing phonological transformations in serial derivations
While most phonological transformations have been shown to be subsequential, there are tonal processes that do not belong to any subregular class, thereby making it difficult to identify a tighter bound on the complexity of phonological processes than the regular languages. This paper argues that a tighter bound obtains from examining the way transformations are computed: when derived in serial, phonological processes can be decomposed into iterated subsequential maps.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
This work has greatly benefited from discussions with Carolyn Anderson, Thomas Graf, Jeff Heinz, Adam Jardine, Gaja Jarosz, John McCarthy, Joe Pater, Brandon Prickett, Kristine Yu, participants in the Phonology Reading Group and Sound Workshop at the University of Massachusetts, Amherst, and the audience at NECPHON 11, as well as comments from three anonymous reviewers for SCiL 2018. This work was supported by the National Science Foundation through grant BCS-424077. All remaining errors are of course my own.
year: 2018 | sdg1–17: all false
traum-etal-2004-evaluation
http://www.lrec-conf.org/proceedings/lrec2004/pdf/768.pdf
Evaluation of Multi-party Virtual Reality Dialogue Interaction
We describe a dialogue evaluation plan for a multi-character virtual reality training simulation. A multi-component evaluation plan is presented, including user satisfaction, intended task completion, recognition rate, and a new annotation scheme for appropriateness. Preliminary results for formative tests are also presented.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
We would like to thank the many members of the MRE project team for help in this work. First, those who helped build parts of the system. Also Sheryl Kwak, Lori Weiss, Bryan Kramer, Dave Miraglia, Rob Groome, Jon Gratch, and Kate Labore for helping with the data collection, and Captain Roland Miraco and Sergeant Dan Johnson for helping find cadet trainees. Eduard Hovy, Shri Narayanan, Kevin Knight, and Anton Leuski have given useful advice on evaluation. The work described in this paper was supported by the Department of the Army under contract number DAAD 19-99-D-0046. Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the Department of the Army.
year: 2004 | sdg1–17: all false
passonneau-etal-2010-learning
https://aclanthology.org/N10-1126
Learning about Voice Search for Spoken Dialogue Systems
In a Wizard-of-Oz experiment with multiple wizard subjects, each wizard viewed automated speech recognition (ASR) results for utterances whose interpretation is critical to task success: requests for books by title from a library database. To avoid non-understandings, the wizard directly queried the application database with the ASR hypothesis (voice search). To learn how to avoid misunderstandings, we investigated how wizards dealt with uncertainty in voice search results. Wizards were quite successful at selecting the correct title from query results that included a match. The most successful wizard could also tell when the query results did not contain the requested title. Our learned models of the best wizard's behavior combine features available to wizards with some that are not, such as recognition confidence and acoustic model scores.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
This research was supported by the National Science Foundation under IIS-0745369, IIS-084966, and IIS-0744904. We thank the anonymous reviewers, the Heiskell Library, our CMU collaborators, our statistical wizard Liana Epstein, and our enthusiastic undergraduate research assistants.
year: 2010 | sdg1–17: all false
aly-etal-2021-fact
https://aclanthology.org/2021.fever-1.1
The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) Shared Task
The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) shared task asks participating systems to determine whether human-authored claims are SUPPORTED or REFUTED based on evidence retrieved from Wikipedia (or NOTENOUGHINFO if the claim cannot be verified). Compared to the FEVER 2018 shared task, the main challenge is the addition of structured data (tables and lists) as a source of evidence. The claims in the FEVEROUS dataset can be verified using only structured evidence, only unstructured evidence, or a mixture of both. Submissions are evaluated using the FEVEROUS score that combines label accuracy and evidence retrieval. Unlike FEVER 2018 (Thorne et al., 2018a), FEVEROUS requires partial evidence to be returned for NOTENOUGHINFO claims, and the claims are longer and thus more complex. The shared task received 13 entries, six of which were able to beat the baseline system. The winning team was "Bust a move!", achieving a FEVEROUS score of 27% (+9% compared to the baseline). In this paper we describe the shared task, present the full results and highlight commonalities and innovations among the participating systems. [Example claim from the paper's figure: "In the 2018 Naples general election, Roberto Fico, an Italian politician and member of the Five Star Movement, received 57,119 votes with 57.6 percent of the total votes."]
label_nlp4sg: true | task: [] | method: [] | goal1: Peace, Justice and Strong Institutions | goal2–3: null | acknowledgments:
We would like to thank Amazon for sponsoring the dataset generation and supporting the FEVER workshop and the FEVEROUS shared task. Rami Aly is supported by the Engineering and Physical Sciences Research Council Doctoral Training Partnership (EPSRC). James Thorne is supported by an Amazon Alexa Graduate Research Fellowship. Zhijiang Guo, Michael Schlichtkrull and Andreas Vlachos are supported by the ERC grant AVeriTeC (GA 865958).
year: 2021 | sdg1–17: sdg16 true; all others false
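The FEVEROUS score named above couples label accuracy with evidence retrieval: roughly, an instance counts only if the label is correct and some complete gold evidence set is covered by the retrieved evidence. A hedged sketch of that combination, based on the abstract's description rather than the official scorer:

```python
def feverous_score(predictions: list[dict]) -> float:
    """Each prediction carries a predicted label, the gold label, a list
    of gold evidence sets, and the retrieved evidence. An instance scores
    1 only when the label is right and some full gold evidence set is
    contained in the retrieved pieces."""
    def covered(pred: dict) -> bool:
        retrieved = set(pred["retrieved_evidence"])
        return any(set(gold) <= retrieved for gold in pred["gold_evidence_sets"])

    hits = sum(1 for p in predictions
               if p["predicted_label"] == p["gold_label"] and covered(p))
    return hits / max(len(predictions), 1)

# Toy usage with hypothetical evidence identifiers.
example = [{"predicted_label": "SUPPORTED", "gold_label": "SUPPORTED",
            "gold_evidence_sets": [["page_A_cell_0_1"]],
            "retrieved_evidence": ["page_A_cell_0_1", "page_A_sentence_2"]}]
print(feverous_score(example))  # 1.0
```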
utiyama-etal-2009-mining
https://aclanthology.org/2009.mtsummit-papers.18
Mining Parallel Texts from Mixed-Language Web Pages
We propose to mine parallel texts from mixed-language web pages. We define a mixed-language web page as a web page consisting of (at least) two languages. We mined Japanese-English parallel texts from mixed-language web pages. We presented the statistics for extracted parallel texts and conducted machine translation experiments. These statistics and experiments showed that mixed-language web pages are rich sources of parallel texts.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2009 | sdg1–17: all false
zhang-etal-2021-crafting
https://aclanthology.org/2021.acl-long.153
Crafting Adversarial Examples for Neural Machine Translation
Effective adversary generation for neural machine translation (NMT) is a crucial prerequisite for building robust machine translation systems. In this work, we investigate veritable evaluations of NMT adversarial attacks, and propose a novel method to craft NMT adversarial examples. We first show the current NMT adversarial attacks may be improperly estimated by the commonly used monodirectional translation, and we propose to leverage the round-trip translation technique to build valid metrics for evaluating NMT adversarial attacks. Our intuition is that an effective NMT adversarial example, which imposes minor shifting on the source and degrades the translation dramatically, would naturally lead to a semantic-destroyed round-trip translation result. We then propose a promising black-box attack method called Word Saliency speedup Local Search (WSLS) that could effectively attack the mainstream NMT architectures. Comprehensive experiments demonstrate that the proposed metrics could accurately evaluate the attack effectiveness, and the proposed WSLS could significantly break the state-of-the-art NMT models with small perturbation. Besides, WSLS exhibits strong transferability on attacking Baidu and Bing online translators.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null
acknowledgments: This work is supported by National Natural Science Foundation (62076105) and Microsoft Research Asia Collaborative Research Fund (99245180). We thank Xiaosen Wang for helpful suggestions on our work.
year: 2021 | sdg1–17: all false
malmasi-etal-2015-oracle
https://aclanthology.org/W15-0620
Oracle and Human Baselines for Native Language Identification
We examine different ensemble methods, including an oracle, to estimate the upper-limit of classification accuracy for Native Language Identification (NLI). The oracle outperforms state-of-the-art systems by over 10% and results indicate that for many misclassified texts the correct class label receives a significant portion of the ensemble votes, often being the runner-up. We also present a pilot study of human performance for NLI, the first such experiment. While some participants achieve modest results on our simplified setup with 5 L1s, they did not outperform our NLI system, and this performance gap is likely to widen on the standard NLI setup.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments:
We would like to thank the three anonymous reviewers as well as our raters: Martin Chodorow, Carla Parra Escartin, Marte Kvamme, Aasish Pappu, Dragomir Radev, Patti Spinner, Robert Stine, Kapil Thadani, Alissa Vik and Gloria Zen.
year: 2015 | sdg1–17: all false
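The oracle in the malmasi-etal-2015 abstract is the standard upper-bound ensemble: a text counts as correct if any member classifier predicts its true label. A minimal sketch:

```python
def oracle_accuracy(member_predictions: list[list[str]], gold: list[str]) -> float:
    """Upper-bound ensemble accuracy: an instance is correct when at
    least one ensemble member predicted the gold label for it."""
    correct = sum(1 for i, label in enumerate(gold)
                  if any(preds[i] == label for preds in member_predictions))
    return correct / max(len(gold), 1)

# Three classifiers' predictions over four texts, plus the gold L1 labels.
members = [["de", "fr", "es", "it"],
           ["fr", "fr", "it", "it"],
           ["de", "es", "es", "fr"]]
gold = ["de", "fr", "es", "fr"]
print(oracle_accuracy(members, gold))  # 1.0 (every gold label is someone's vote)
```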
wang-lee-2018-learning
https://aclanthology.org/D18-1451
Learning to Encode Text as Human-Readable Summaries using Generative Adversarial Networks
Auto-encoders compress input data into a latent-space representation and reconstruct the original data from the representation. This latent representation is not easily interpreted by humans. In this paper, we propose training an auto-encoder that encodes input text into human-readable sentences, and unpaired abstractive summarization is thereby achieved. The auto-encoder is composed of a generator and a reconstructor. The generator encodes the input text into a shorter word sequence, and the reconstructor recovers the generator input from the generator output. To make the generator output human-readable, a discriminator restricts the output of the generator to resemble human-written sentences. By taking the generator output as the summary of the input text, abstractive summarization is achieved without document-summary pairs as training data. Promising results are shown on both English and Chinese corpora.
label_nlp4sg: false | task: [] | method: [] | goal1–3: null | acknowledgments: null
year: 2018 | sdg1–17: all false
li-etal-2021-conversations
https://aclanthology.org/2021.acl-long.11
Conversations Are Not Flat: Modeling the Dynamic Information Flow across Dialogue Utterances
Nowadays, open-domain dialogue models can generate acceptable responses according to the historical context based on large-scale pretrained language models. However, they generally concatenate the dialogue history directly as the model input to predict the response, a practice we name the flat pattern, which ignores the dynamic information flow across dialogue utterances. In this work, we propose the DialoFlow model, in which we introduce a dynamic flow mechanism to model the context flow, and design three training objectives to capture the information dynamics across dialogue utterances by addressing the semantic influence brought about by each utterance in large-scale pre-training. Experiments on the multi-reference Reddit Dataset and DailyDialog Dataset demonstrate that our DialoFlow significantly outperforms DialoGPT on the dialogue generation task. Besides, we propose the Flow score, an effective automatic metric for evaluating interactive human-bot conversation quality based on the pre-trained DialoFlow, which presents high chatbot-level correlation (r = 0.9) with human ratings among 11 chatbots. Code and pre-trained models will be public.
false
[]
[]
null
null
null
We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions. This work is supported by National Key R&D Program of China (NO. 2018AAA0102502).
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tomabechi-1991-quasi
https://aclanthology.org/1991.iwpt-1.19
Quasi-Destructive Graph Unification
Graph unification is the most expensive part of unification-based grammar parsing. It often takes over 90% of the total parsing time of a sentence. We focus on two speed-up
false
[]
[]
null
null
null
null
1991
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
avramidis-etal-2020-fine
https://aclanthology.org/2020.wmt-1.38
Fine-grained linguistic evaluation for state-of-the-art Machine Translation
This paper describes a test suite submission providing detailed statistics of linguistic performance for the state-of-the-art German-English systems of the Fifth Conference of Machine Translation (WMT20). The analysis covers 107 phenomena organized in 14 categories based on about 5,500 test items, including a manual annotation effort of 45 person hours. Two systems (Tohoku and VolcanTrans) appear to have significantly better test suite accuracy than the others, although the best system of WMT20 is not significantly better than the one from WMT19 in a macro-average. Additionally, we identify some linguistic phenomena where all systems suffer (such as idioms, resultative predicates and pluperfect), but we are also able to identify particular weaknesses for individual systems (such as quotation marks, lexical ambiguity and sluicing). Most of the systems of WMT19 which submitted new versions this year show improvements.
false
[]
[]
null
null
null
This research was supported by the German Research Foundation through the project TextQ and by the German Federal Ministry of Education through the project SocialWear.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
chakravarthi-etal-2020-corpus
https://aclanthology.org/2020.sltu-1.28
Corpus Creation for Sentiment Analysis in Code-Mixed Tamil-English Text
Understanding the sentiment of a comment from a video or an image is an essential task in many applications. Sentiment analysis of a text can be useful for various decision-making processes. One such application is to analyse the popular sentiments of videos on social media based on viewer comments. However, comments from social media do not follow strict rules of grammar, and they contain mixing of more than one language, often written in non-native scripts. Non-availability of annotated code-mixed data for a low-resourced language like Tamil also adds difficulty to this problem. To overcome this, we created a gold standard Tamil-English code-switched, sentiment-annotated corpus containing 15,744 comment posts from YouTube. In this paper, we describe the process of creating the corpus and assigning polarities. We present inter-annotator agreement and show the results of sentiment analysis trained on this corpus as a benchmark.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schmid-schulte-im-walde-2000-robust
https://aclanthology.org/C00-2105
Robust German Noun Chunking With a Probabilistic Context-Free Grammar
We present a noun chunker for German which is based on a head-lexicalised probabilistic context-free grammar. A manually developed grammar was semi-automatically extended with robustness rules in order to allow parsing of unrestricted text. The model parameters were learned from unlabelled training data by a probabilistic context-free parser. For extracting noun chunks, the parser generates all possible noun chunk analyses, scores them with a novel algorithm which maximizes the best chunk sequence criterion, and chooses the most probable chunk sequence. An evaluation of the chunker on 2,140 hand-annotated noun chunks yielded 92% recall and 93% precision.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bredenkamp-etal-2000-looking
http://www.lrec-conf.org/proceedings/lrec2000/pdf/299.pdf
Looking for Errors: A Declarative Formalism for Resource-adaptive Language Checking
The paper describes a phenomenon-based approach to grammar checking, which draws on the integration of different shallow NLP technologies, including morphological and POS taggers, as well as probabilistic and rule-based partial parsers. We present a declarative specification formalism for grammar checking and controlled language applications which greatly facilitates the development of checking components.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
senellart-1999-semi
https://aclanthology.org/1999.eamt-1.5
Semi-automatic acquisition of lexical resources for new languages or new domains
null
false
[]
[]
null
null
null
null
1999
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
el-mekki-etal-2021-domain
https://aclanthology.org/2021.naacl-main.226
Domain Adaptation for Arabic Cross-Domain and Cross-Dialect Sentiment Analysis from Contextualized Word Embedding
Finetuning deep pre-trained language models has shown state-of-the-art performances on a wide range of Natural Language Processing (NLP) applications. Nevertheless, their generalization performance drops under domain shift. In the case of Arabic language, diglossia makes building and annotating corpora for each dialect and/or domain a more challenging task. Unsupervised Domain Adaptation tackles this issue by transferring the learned knowledge from labeled source domain data to unlabeled target domain data. In this paper, we propose a new unsupervised domain adaptation method for Arabic cross-domain and cross-dialect sentiment analysis from Contextualized Word Embedding. Several experiments are performed adopting the coarse-grained and the fine-grained taxonomies of Arabic dialects. The obtained results show that our method yields very promising results and outperforms several domain adaptation methods for most of the evaluated datasets. On average, our method increases the performance by an improvement rate of 20.8% over the zero-shot transfer learning from BERT.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
fujiyoshi-2004-restrictions
https://aclanthology.org/C04-1012
Restrictions on Monadic Context-Free Tree Grammars
In this paper, subclasses of monadic context-free tree grammars (CFTGs) are compared. Since linear, nondeleting, monadic CFTGs generate the same class of string languages as tree adjoining grammars (TAGs), it is examined whether the restrictions of linearity and nondeletion on monadic CFTGs are necessary to generate the same class of languages. Epsilon-freeness on linear, nondeleting, monadic CFTGs is also examined. G ⇒* α. The string language generated by G is L_S(G) = {yield(α) | α ∈ L(G)}. Note that L_S(G) ⊆ (Σ_0 − {ε})*.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bertagna-etal-2004-content
http://www.lrec-conf.org/proceedings/lrec2004/pdf/743.pdf
Content Interoperability of Lexical Resources: Open Issues and "MILE" Perspectives
The paper tackles the issue of content interoperability among lexical resources, by presenting an experiment of mapping differently conceived lexicons, FrameNet and NOMLEX, onto MILE (Multilingual ISLE Lexical Entry), a meta-entry for the encoding of multilingual lexical information, acting as a general schema of shared and common lexical objects. The aim is to (i) raise problems and (ii) test the expressive potentialities of MILE as a standard environment for Computational Lexicons.
false
[]
[]
null
null
null
We want to dedicate this contribution to the memory of Antonio Zampolli, who has been the pioneer of standardization initiatives in Europe.
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schlichtkrull-martinez-alonso-2016-msejrku
https://aclanthology.org/S16-1209
MSejrKu at SemEval-2016 Task 14: Taxonomy Enrichment by Evidence Ranking
Automatic enrichment of semantic taxonomies with novel data is a relatively unexplored task with potential benefits in a broad array of natural language processing problems. Task 14 of SemEval 2016 poses the challenge of designing systems for this task. In this paper, we describe and evaluate several machine learning systems constructed for our participation in the competition. We demonstrate an f1-score of 0.680 for our submitted systems, a small improvement over the 0.679 produced by the hard baseline.
false
[]
[]
null
null
null
null
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gey-etal-2008-japanese
http://www.lrec-conf.org/proceedings/lrec2008/pdf/363_paper.pdf
A Japanese-English Technical Lexicon for Translation and Language Research
In this paper we present a Japanese-English bilingual lexicon of technical terms. The lexicon was derived from the first and second NTCIR evaluation collections for research into cross-language information retrieval for Asian languages. While it can be utilized for translation between Japanese and English, the lexicon is also suitable for language research and language engineering. Since it is collection-derived, it contains instances of word variants and misspellings which make it eminently suitable for further research. For a subset of the lexicon we make available the collection statistics. In addition we make available a Katakana subset suitable for transliteration research.
false
[]
[]
null
null
null
null
2008
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
oprea-magdy-2019-exploring
https://aclanthology.org/P19-1275
Exploring Author Context for Detecting Intended vs Perceived Sarcasm
We investigate the impact of using author context on textual sarcasm detection. We define author context as the embedded representation of their historical posts on Twitter and suggest neural models that extract these representations. We experiment with two tweet datasets, one labelled manually for sarcasm, and the other via tag-based distant supervision. We achieve state-of-the-art performance on the second dataset, but not on the one labelled manually, indicating a difference between intended sarcasm, captured by distant supervision, and perceived sarcasm, captured by manual labelling.
false
[]
[]
null
null
null
This work was supported in part by the EPSRC Centre for Doctoral Training in Data Science, funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016427/1); the University of Edinburgh; and The Financial Times.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
almeida-etal-2015-aligning
https://aclanthology.org/P15-1040
Aligning Opinions: Cross-Lingual Opinion Mining with Dependencies
We propose a cross-lingual framework for fine-grained opinion mining using bitext projection. The only requirements are a running system in a source language and word-aligned parallel data. Our method projects opinion frames from the source to the target language, and then trains a system on the target language using the automatic annotations. Key to our approach is a novel dependency-based model for opinion mining, which we show, as a byproduct, to be on par with the current state of the art for English, while avoiding the need for integer programming or reranking. In cross-lingual mode (English to Portuguese), our approach compares favorably to a supervised system (with scarce labeled data), and to a delexicalized model trained using universal tags and bilingual word embeddings.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their insightful comments, and Richard Johansson for sharing his code and for answering several questions. This work was partially supported by the EU/FEDER programme, QREN/POR Lisboa (Portugal), under the Intelligo project (contract 2012/24803) and by FCT grants UID/EEA/50008/2013 and PTDC/EEI-SII/2312/2012.
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
aloraini-poesio-2020-cross
https://aclanthology.org/2020.lrec-1.11
Cross-lingual Zero Pronoun Resolution
In languages like Arabic, Chinese, Italian, Japanese, Korean, Portuguese, Spanish, and many others, predicate arguments in certain syntactic positions are omitted rather than realized as overt pronouns, and are thus called zero- or null-pronouns. Identifying and resolving such omitted arguments is crucial to machine translation, information extraction and other NLP tasks, but depends heavily on semantic coherence and lexical relationships. We propose a BERT-based cross-lingual model for zero pronoun resolution, and evaluate it on the Arabic and Chinese portions of OntoNotes 5.0. As far as we know, ours is the first neural model of zero-pronoun resolution for Arabic; and our model also outperforms the state-of-the-art for Chinese. In the paper we also evaluate BERT feature extraction and fine-tuned models on the task, and compare them with our model. We also report on an investigation of BERT layers, indicating which layer encodes the most suitable representation for the task. Our code is available at https://github.com/amaloraini/cross-lingual-ZP.
false
[]
[]
null
null
null
We would like to thank the anonymous reviewers for their insightful comments and suggestions which helped to improve the quality of the paper.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
jwalapuram-2017-evaluating
https://doi.org/10.26615/issn.1314-9156.2017_003
Evaluating Dialogs based on Grice's Maxims
null
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
heeman-2007-combining
https://aclanthology.org/N07-1034
Combining Reinforcement Learning with Information-State Update Rules
Reinforcement learning gives a way to learn under what circumstances to perform which actions. However, this approach lacks a formal framework for specifying hand-crafted restrictions, for specifying the effects of the system actions, or for specifying the user simulation. The information state approach, in contrast, allows system and user behavior to be specified as update rules, with preconditions and effects. This approach can be used to specify complex dialogue behavior in a systematic way. We propose combining these two approaches, thus allowing a formal specification of the dialogue behavior, and allowing hand-crafted preconditions, with remaining ones determined via reinforcement learning so as to minimize dialogue cost.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wiebe-1993-issues
https://aclanthology.org/W93-0239
Issues in Linguistic Segmentation
This paper addresses discourse structure from the perspective of understanding. It would perhaps help us understand the nature of discourse relations if we better understood what units of a text can be related to one another. In one major theory of discourse structure, Rhetorical Structure Theory (Mann & Thompson 1988; hereafter simply RST), the smallest possible linguistic units that can participate in a rhetorical relation are called units,
false
[]
[]
null
null
null
null
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
mihaylov-etal-2015-exposing
https://aclanthology.org/R15-1058
Exposing Paid Opinion Manipulation Trolls
Recently, Web forums have been invaded by opinion manipulation trolls. Some trolls try to influence the other users driven by their own convictions, while in other cases they can be organized and paid, e.g., by a political party or a PR agency that gives them specific instructions what to write. Finding paid trolls automatically using machine learning is a hard task, as there is not enough training data to train a classifier; yet some test data can be obtained, as these trolls are sometimes caught and widely exposed. In this paper, we solve the training data problem by assuming that a user who is called a troll by several different people is likely to be such, and one who has never been called a troll is unlikely to be such. We compare the profiles of (i) paid trolls vs. (ii) "mentioned" trolls vs. (iii) non-trolls, and we further show that a classifier trained to distinguish (ii) from (iii) does quite well also at telling apart (i) from (iii).
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2015
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
tarmom-etal-2020-automatic
https://aclanthology.org/2020.icon-main.4
Automatic Hadith Segmentation using PPM Compression
In this paper we explore the use of Prediction by Partial Matching (PPM) compression to segment Hadith into its two main components (Isnad and Matan). The experiments utilized the PPMD variant of PPM, showing that PPMD is effective in Hadith segmentation. It was also tested on Hadith corpora of different structures. In the first experiment we used the non-authentic Hadith (NAH) corpus for training models and testing, and in the second experiment we used the NAH corpus for training models and the Leeds University and King Saud University (LK) Hadith corpus for testing the PPMD segmenter. PPMD of order 7 achieved an accuracy of 92.76% and 90.10% in the first and second experiments, respectively.
false
[]
[]
null
null
null
The first author is grateful to the Saudi government for their support.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
basile-etal-2021-probabilistic
https://aclanthology.org/2021.ranlp-1.16
Probabilistic Ensembles of Zero- and Few-Shot Learning Models for Emotion Classification
Emotion Classification is the task of automatically associating a text with a human emotion. State-of-the-art models are usually learned using annotated corpora or rely on hand-crafted affective lexicons. We present an emotion classification model that does not require a large annotated corpus to be competitive. We experiment with pretrained language models in both a zero-shot and few-shot configuration. We build several of such models and consider them as biased, noisy annotators, whose individual performance is poor. We aggregate the predictions of these models using a Bayesian method originally developed for modelling crowdsourced annotations. Next, we show that the resulting system performs better than the strongest individual model. Finally, we show that when trained on few labelled data, our systems outperform fully-supervised models.
false
[]
[]
null
null
null
null
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
li-etal-2019-information
https://aclanthology.org/N19-1359
Information Aggregation for Multi-Head Attention with Routing-by-Agreement
Multi-head attention is appealing for its ability to jointly extract different types of information from multiple representation subspaces. Concerning the information aggregation, a common practice is to use a concatenation followed by a linear transformation, which may not fully exploit the expressiveness of multi-head attention. In this work, we propose to improve the information aggregation for multi-head attention with a more powerful routing-by-agreement algorithm. Specifically, the routing algorithm iteratively updates the proportion of how much a part (i.e. the distinct information learned from a specific subspace) should be assigned to a whole (i.e. the final output representation), based on the agreement between parts and wholes. Experimental results on linguistic probing tasks and machine translation tasks prove the superiority of the advanced information aggregation over the standard linear transformation.
false
[]
[]
null
null
null
Jian Li and Michael R. Lyu were supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14210717 of the General Research Fund), and Microsoft Research Asia (2018 Microsoft Research Asia Collaborative Research Award). We thank the anonymous reviewers for their insightful comments and suggestions.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
wang-etal-2020-negative
https://aclanthology.org/2020.emnlp-main.359
On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment
Modern multilingual models are trained on concatenated text from multiple languages in hopes of conferring benefits to each (positive transfer), with the most pronounced benefits accruing to low-resource languages. However, recent work has shown that this approach can degrade performance on high-resource languages, a phenomenon known as negative interference. In this paper, we present the first systematic study of negative interference. We show that, contrary to previous belief, negative interference also impacts low-resource languages. While parameters are maximally shared to learn language-universal structures, we demonstrate that language-specific parameters do exist in multilingual models and they are a potential cause of negative interference. Motivated by these observations, we also present a meta-learning algorithm that obtains better cross-lingual transferability and alleviates negative interference, by adding language-specific layers as meta-parameters and training them in a manner that explicitly improves shared layers' generalization on all languages. Overall, our results show that negative interference is more common than previously known, suggesting new directions for improving multilingual representations.
false
[]
[]
null
null
null
We want to thank Jaime Carbonell for his support on the early stage of this project. We also would like to thank Zihang Dai, Graham Neubig, Orhan Firat, Yuan Cao, Jiateng Xie, Xinyi Wang, Ruochen Xu and Yiheng Zhou for insightful discussions. Lastly, we thank anonymous reviewers for their valuable feedback.
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
besacier-etal-2010-lig
https://aclanthology.org/2010.iwslt-evaluation.12
LIG statistical machine translation systems for IWSLT 2010
null
false
[]
[]
null
null
null
null
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dou-etal-2019-unsupervised
https://aclanthology.org/D19-1147
Unsupervised Domain Adaptation for Neural Machine Translation with Domain-Aware Feature Embeddings
The recent success of neural machine translation models relies on the availability of high quality, in-domain data. Domain adaptation is required when domain-specific data is scarce or nonexistent. Previous unsupervised domain adaptation strategies include training the model with in-domain copied monolingual or back-translated data. However, these methods use generic representations for text regardless of domain shift, which makes it infeasible for translation models to control outputs conditional on a specific domain. In this work, we propose an approach that adapts models with domain-aware feature embeddings, which are learned via an auxiliary language modeling task. Our approach allows the model to assign domain-specific representations to words and output sentences in the desired domain. Our empirical results demonstrate the effectiveness of the proposed strategy, achieving consistent improvements in multiple experimental settings. In addition, we show that combining our method with back translation can further improve the performance of the model.
false
[]
[]
null
null
null
We are grateful to Xinyi Wang and anonymous reviewers for their helpful suggestions and insightful comments. We also thank Zhi-Hao Zhou, Shuyan Zhou and Anna Belova for proofreading the paper. This material is based upon work generously supported partly by the National Science Foundation under grant 1761548 and the Defense Advanced Research Projects Agency Information Innovation Office (I2O) Low Resource Languages for Emergent Incidents (LORELEI) program under Contract No. HR0011-15-C0114. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
read-etal-2012-sentence
https://aclanthology.org/C12-2096
Sentence Boundary Detection: A Long Solved Problem?
We review the state of the art in automated sentence boundary detection (SBD) for English and call for a renewed research interest in this foundational first step in natural language processing. We observe severe limitations in comparability and reproducibility of earlier work and a general lack of knowledge about genre- and domain-specific variations. To overcome these barriers, we conduct a systematic empirical survey of a large number of extant approaches, across a broad range of diverse corpora. We further observe that much previous work interpreted the SBD task too narrowly, leading to overly optimistic estimates of SBD performance on running text. To better relate SBD to practical NLP use cases, we thus propose a generalized definition of the task, eliminating text- or language-specific assumptions about candidate boundary points. More specifically, we quantify degrees of variation across 'standard' corpora of edited, relatively formal language, as well as performance degradation when moving to less formal language, viz. various samples of user-generated Web content. For these latter types of text, we demonstrate how moderate interpretation of document structure (as is now often available more or less explicitly through markup) can substantially contribute to overall SBD performance.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
muller-etal-2021-unseen
https://aclanthology.org/2021.naacl-main.38
When Being Unseen from mBERT is just the Beginning: Handling New Languages With Multilingual Language Models
Transfer learning based on pretraining language models on a large amount of raw data has become a new norm to reach state-of-the-art performance in NLP. Still, it remains unclear how this approach should be applied for unseen languages that are not covered by any available large-scale multilingual language model and for which only a small amount of raw data is generally available. In this work, by comparing multilingual and monolingual models, we show that such models behave in multiple ways on unseen languages. Some languages greatly benefit from transfer learning and behave similarly to closely related high resource languages whereas others apparently do not. Focusing on the latter, we show that this failure to transfer is largely related to the impact of the script used to write such languages. We show that transliterating those languages significantly improves the potential of large-scale multilingual language models on downstream tasks. This result provides a promising direction towards making these massively multilingual models useful for a new set of unseen languages.
false
[]
[]
null
null
null
The Inria authors were partly funded by two French National Research Agency (ANR) projects, namely PARSITI (ANR-16-CE33-0021) and SoSweet (ANR-15-CE38-0011), as well as by Benoit Sagot's chair in the PRAIRIE institute as part of the "Investissements d'avenir" programme under the reference ANR-19-P3IA-0001. Antonios Anastasopoulos is generously supported by NSF Award 2040926 and is also thankful to Graham Neubig for very insightful initial discussions on this research direction.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tomokiyo-ries-1997-makes
https://aclanthology.org/W97-1008
What makes a word: Learning base units in Japanese for speech recognition
We describe an automatic process for learning word units in Japanese. Since the Japanese orthography has no spaces delimiting words, the first step in building a Japanese speech recognition system is to define the units that will be recognized. Our method applies a compound-finding algorithm, previously used to find word sequences in English, to learning syllable sequences in Japanese. We report that we were able not only to extract meaningful units, eliminating the need for possibly inconsistent manual segmentation, but also to decrease perplexity using this automatic procedure, which relies on a statistical, not syntactic, measure of relevance. Our algorithm also uncovers the kinds of environments that help the recognizer predict phonological alternations, which are often hidden by morphologically-motivated tokenization.
false
[]
[]
null
null
null
null
1997
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
takmaz-etal-2020-generating
https://aclanthology.org/2020.emnlp-main.377
Generating Image Descriptions via Sequential Cross-Modal Alignment Guided by Human Gaze
When speakers describe an image, they tend to look at objects before mentioning them. In this paper, we investigate such sequential cross-modal alignment by modelling the image description generation process computationally. We take as our starting point a state-of-the-art image captioning system and develop several model variants that exploit information from human gaze patterns recorded during language production. In particular, we propose the first approach to image description generation where visual processing is modelled sequentially. Our experiments and analyses confirm that better descriptions can be obtained by exploiting gaze-driven attention and shed light on human cognitive processes by comparing different ways of aligning the gaze modality with language production. We find that processing gaze data sequentially leads to descriptions that are better aligned to those produced by speakers, more diverse, and more natural, particularly when gaze is encoded with a dedicated recurrent component.
false
[]
[]
null
null
null
We are grateful to Lieke Gelderloos for her help with the Dutch transcriptions, and to Jelle Zuidema and the participants of EurNLP 2019 for their feedback on a preliminary version of the work. Lisa Beinborn worked on the project mostly when being employed at the University of Amsterdam. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819455 awarded to Raquel Fernández).
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
scharffe-2017-class
https://aclanthology.org/W17-7303
Class Disjointness Constraints as Specific Objective Functions in Neural Network Classifiers
Increasing performance of deep learning techniques on computer vision tasks like object detection has led to systems able to detect a large number of classes of objects. Most deep learning models use simple unstructured labels and assume that any domain knowledge will be learned from the data. However when the domain is complex and the data limited, it may be useful to use domain knowledge encoded in an ontology to guide the learning process. In this paper, we conduct experiments to introduce constraints into the training process of a neural network. We show that explicitly modeling a disjointness axiom between a set of classes as a specific objective function leads to reducing violations for this constraint, while also reducing the overall classification error. This opens a way to import domain knowledge modeled in an ontology into a deep learning process.
false
[]
[]
null
null
null
null
2017
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bowman-zhu-2019-deep
https://aclanthology.org/N19-5002
Deep Learning for Natural Language Inference
The task of natural language inference (NLI; also known as recognizing textual entailment, or RTE) asks a system to evaluate the relationships between the truth-conditional meanings of two sentences or, in other words, decide whether one sentence follows from another. This task neatly isolates the core NLP problem of sentence understanding as a classification problem, and also offers promise as an intermediate step in the building of complex systems (Dagan et al., 2005; MacCartney, 2009; Bowman et al., 2015) .
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kang-etal-2018-dataset
https://aclanthology.org/N18-1149
A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP Applications
Peer reviewing is a central component in the scientific publishing process. We present the first public dataset of scientific peer reviews available for research purposes (PeerRead v1), providing an opportunity to study this important artifact. The dataset consists of 14.7K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR. The dataset also includes 10.7K textual peer reviews written by experts for a subset of the papers. We describe the data collection process and report interesting observed phenomena in the peer reviews. We also propose two novel NLP tasks based on this dataset and provide simple baseline models. In the first task, we show that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline. In the second task, we predict the numerical scores of review aspects and show that simple models can outperform the mean baseline for aspects with high variance such as 'originality' and 'impact'.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
This work would not have been possible without the efforts of Rich Gerber and Paolo Gai (developers of the softconf.com conference management system), Stefan Riezler, Yoav Goldberg (chairs of CoNLL 2016), Min-Yen Kan, Regina Barzilay (chairs of ACL 2017) for allowing authors and reviewers to opt-in for this dataset during the official review process. We thank the openreview.net, arxiv.org and semanticscholar.org teams for their commitment to promoting transparency and openness in scientific communication. We also thank Peter Clark, Chris Dyer, Oren Etzioni, Matt Gardner, Nicholas FitzGerald, Dan Jurafsky, Hao Peng, Minjoon Seo, Noah A. Smith, Swabha Swayamdipta, Sam Thomson, Trang Tran, Vicki Zayats and Luke Zettlemoyer for their helpful comments.
2018
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
kim-lee-2000-decision
https://aclanthology.org/C00-2156
Decision-Tree based Error Correction for Statistical Phrase Break Prediction in Korean
In this paper, we present a new phrase break prediction architecture that integrates a probabilistic approach with decision-tree based error correction. The probabilistic method alone usually suffers from performance degradation due to inherent data sparseness problems and it only covers a limited range of contextual information. Moreover, the module cannot utilize the selective morpheme tag and relative distance to the other phrase breaks. The decision-tree based error correction was tightly integrated to overcome these limitations. The initial phrase-break-tagged morpheme sequence is corrected with the error-correcting decision tree which was induced by C4.5 from the correctly tagged corpus with the output of the probabilistic predictor. The decision tree-based post error correction provided improved results even with a phrase break predictor that has poor initial performance. Moreover, the system can be flexibly tuned to a new corpus without massive retraining.
false
[]
[]
null
null
null
null
2000
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
aulamo-tiedemann-2019-opus
https://aclanthology.org/W19-6146
The OPUS Resource Repository: An Open Package for Creating Parallel Corpora and Machine Translation Services
This paper presents a flexible and powerful system for creating parallel corpora and for running neural machine translation services. Our package provides a scalable data repository backend that offers transparent data pre-processing pipelines and automatic alignment procedures that facilitate the compilation of extensive parallel data sets from a variety of sources. Moreover, we develop a web-based interface that constitutes an intuitive frontend for end-users of the platform. The whole system can easily be distributed over virtual machines and implements a sophisticated permission system with secure connections and a flexible database for storing arbitrary metadata. Furthermore, we also provide an interface for neural machine translation that can run as a service on virtual machines, which also incorporates a connection to the data repository software.
false
[]
[]
null
null
null
The work was supported by the Swedish Culture Foundation and we are grateful for the resources provided by the Finnish IT Center for Science, CSC.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
shim-kim-1993-towards
https://aclanthology.org/1993.tmi-1.24
Towards a Machine Translation System with Self-Critiquing Capability
null
false
[]
[]
null
null
null
null
1993
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
kaeshammer-demberg-2012-german
http://www.lrec-conf.org/proceedings/lrec2012/pdf/398_Paper.pdf
German and English Treebanks and Lexica for Tree-Adjoining Grammars
We present a treebank and lexicon for German and English, which have been developed for PLTAG parsing. PLTAG is a psycholinguistically motivated, incremental version of tree-adjoining grammar (TAG). The resources are however also applicable to parsing with other variants of TAG. The German PLTAG resources are based on the TIGER corpus and, to the best of our knowledge, constitute the first scalable German TAG grammar. The English PLTAG resources go beyond existing resources in that they include the NP annotation by (Vadas and Curran, 2007), and include the prediction lexicon necessary for PLTAG.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bick-2004-named
http://www.lrec-conf.org/proceedings/lrec2004/pdf/99.pdf
A Named Entity Recognizer for Danish
This paper describes how a preexisting Constraint Grammar based parser for Danish (DanGram, Bick 2002) has been adapted and semantically enhanced in order to accommodate named entity recognition (NER), using rule-based and lexical, rather than probabilistic, methodology. The project is part of a multilingual Nordic initiative, Nomen Nescio, which targets 6 primary name types (human, organisation, place, event, title/semantic product and brand/object). Training data, examples and statistical text data specifics were taken from the Korpus90/2000 annotation initiative (Bick 2003-1). The NER task is addressed following the progressive multi-level parsing architecture of DanGram, delegating different NER-subtasks to different specialised levels. Thus named entities are successively treated first as strings, words, types, and then as contextual units at the morphological, syntactic and semantic levels, consecutively. While lower levels mainly use pattern-matching tools, the higher levels make increasing use of context-based Constraint Grammar rules on the one hand, and lexical information, both morphological and semantic, on the other hand. Levels are implemented as a sequential chain of Perl-programs and CG-grammars. Two evaluation runs on Korpus90/2000 data showed about 2% chunking errors and false positive/false negative proper noun readings (originating at the lower levels), while the NER-typer as such had a 5% error rate with 0.1-0.5% remaining ambiguity, if measured only for correctly chunked proper nouns.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
yang-etal-2019-tokyotech
https://aclanthology.org/S19-2061
TokyoTech\_NLP at SemEval-2019 Task 3: Emotion-related Symbols in Emotion Detection
This paper presents our contextual emotion detection system for the SemEval-2019 shared task 3: EmoContext: Contextual Emotion Detection in Text. This system combines an emotion detection neural network method (Poria et al., 2017), emoji2vec (Eisner et al., 2016) embedding, word2vec embedding (Mikolov et al., 2013), and our proposed emoticon and emoji preprocessing method. The experimental results demonstrate the usefulness of our emoticon and emoji preprocessing method, and that representations of emoticons and emoji contribute to the model's emotion detection.
false
[]
[]
null
null
null
The research results have been achieved by "Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation", the Commissioned Research of National Institute of Information and Communications Technology (NICT), Japan.
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
freibott-1992-computer
https://aclanthology.org/1992.tc-1.5
Computer aided translation in an integrated document production process: Tools and applications
The internationalisation of markets, the ever-shortening life cycles of products as well as the increasing importance of information technology all demand a change in technical equipment, the software used on it and the organisational structures and processes in our working environment. Translation as a whole, but in particular as an integral part of the document production process, has to cope with these changes and with new and additional requirements. This paper describes the organisational and technical solutions developed and implemented in an industrial company for a number of computer aided translation applications integrated in the document production process to meet these requirements and to ensure high-quality mono- and multilingual documentation on restricted budgetary grounds.
true
[]
[]
Industry, Innovation and Infrastructure
null
null
null
1992
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
kozareva-hovy-2011-insights
https://aclanthology.org/P11-1162
Insights from Network Structure for Text Mining
Text mining and data harvesting algorithms have become popular in the computational linguistics community. They employ patterns that specify the kind of information to be harvested, and usually bootstrap either the pattern learning or the term harvesting process (or both) in a recursive cycle, using data learned in one step to generate more seeds for the next. They therefore treat the source text corpus as a network, in which words are the nodes and relations linking them are the edges. The results of computational network analysis, especially from the world wide web, are thus applicable. Surprisingly, these results have not yet been broadly introduced into the computational linguistics community. In this paper we show how various results apply to text mining, how they explain some previously observed phenomena, and how they can be helpful for computational linguistics applications.
false
[]
[]
null
null
null
We acknowledge the support of DARPA contract number FA8750-09-C-3705 and NSF grant IIS-0429360. We would like to thank Sujith Ravi for his useful comments and suggestions.
2011
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
etchegoyhen-etal-2016-exploiting
https://aclanthology.org/L16-1560
Exploiting a Large Strongly Comparable Corpus
This article describes a large comparable corpus for Basque and Spanish and the methods employed to build a parallel resource from the original data. The EITB corpus, a strongly comparable corpus in the news domain, is to be shared with the research community, as an aid for the development and testing of methods in comparable corpora exploitation, and as basis for the improvement of data-driven machine translation systems for this language pair. Competing approaches were explored for the alignment of comparable segments in the corpus, resulting in the design of a simple method which outperformed a state-of-the-art method on the corpus test sets. The method we present is highly portable, computationally efficient, and significantly reduces deployment work, a welcome result for the exploitation of comparable corpora.
false
[]
[]
null
null
null
The authors wish to thank Euskal Irrati Telebista, for providing the resources and agreeing to share them with the research community, and the three anonymous LREC reviewers for their constructive feedback. This work was partially supported by the Basque Government through its funding of project PLATA (Gaitek Programme, 2012-2014).
2016
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
guo-etal-2021-bertweetfr
https://aclanthology.org/2021.wnut-1.49
BERTweetFR : Domain Adaptation of Pre-Trained Language Models for French Tweets
We introduce BERTweetFR, the first large-scale pre-trained language model for French tweets. Our model is initialized using the general-domain French language model CamemBERT (Martin et al., 2020) which follows the base architecture of BERT. Experiments show that BERTweetFR outperforms all previous general-domain French language models on two downstream Twitter NLP tasks of offensiveness identification and named entity recognition. The dataset used in the offensiveness detection task is first created and annotated by our team, filling in the gap of such analytic datasets in French. We make our model publicly available in the transformers library with the aim of promoting future research in analytic tasks for French tweets.
false
[]
[]
null
null
null
This research is supported by the French National research agency (ANR) via the ANR XCOVIF (AAP RA-COVID-19 V6) project. We would also like to thank the National Center for Scientific Research (CNRS) for giving us access to their Jean Zay supercomputer.
2021
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
schockaert-2018-knowledge
https://aclanthology.org/W18-4006
Knowledge Representation with Conceptual Spaces
Steven Schockaert is a professor at Cardiff University. His current research interests include commonsense reasoning, interpretable machine learning, vagueness and uncertainty modelling, representation learning, and information retrieval. He holds an ERC Starting Grant, and has previously been supported by funding from the Leverhulme Trust, EPSRC, and FWO, among others. He was the recipient of the 2008 ECCAI Doctoral Dissertation Award and the IBM Belgium Prize for Computer Science. He is on the board of directors of EurAI, on the editorial board of Artificial Intelligence and an area editor for Fuzzy Sets and Systems. He was PC co-chair of SUM 2016 and the general chair of UKCI 2017.
false
[]
[]
null
null
null
null
2018
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
gasperin-briscoe-2008-statistical
https://aclanthology.org/C08-1033
Statistical Anaphora Resolution in Biomedical Texts
This paper presents a probabilistic model for resolution of non-pronominal anaphora in biomedical texts. The model seeks to find the antecedents of anaphoric expressions, both coreferent and associative ones, and also to identify discourse-new expressions. We consider only the noun phrases referring to biomedical entities. The model reaches state-of-the-art performance: 56-69% precision and 54-67% recall on coreferent cases, and reasonable performance on different classes of associative cases.
true
[]
[]
Good Health and Well-Being
null
null
This work is part of the BBSRC-funded FlySlip project. Caroline Gasperin is funded by a CAPES award from the Brazilian government.
2008
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bond-etal-1994-countability
https://aclanthology.org/C94-1002
Countability and Number in Japanese to English Machine Translation
This paper presents a heuristic method that uses information in the Japanese text along with knowledge of English countability and number stored in transfer dictionaries to determine the countability and number of English noun phrases. Incorporating this method into the machine translation system ALT-J/E helped to raise the percentage of noun phrases generated with correct use of articles and number from 65% to 73%.
false
[]
[]
null
null
null
null
1994
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bender-2012-100
https://aclanthology.org/N12-4001
100 Things You Always Wanted to Know about Linguistics But Were Afraid to Ask*
Many NLP tasks have at their core a subtask of extracting the dependencies-who did what to whom-from natural language sentences. This task can be understood as the inverse of the problem solved in different ways by diverse human languages, namely, how to indicate the relationship between different parts of a sentence. Understanding how languages solve the problem can be extremely useful in both feature design and error analysis in the application of machine learning to NLP. Likewise, understanding cross-linguistic variation can be important for the design of MT systems and other multilingual applications. The purpose of this tutorial is to present in a succinct and accessible fashion information about the structure of human languages that can be useful in creating more linguistically sophisticated, more language independent, and thus more successful NLP systems.
false
[]
[]
null
null
null
null
2012
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bojar-etal-2010-data
http://www.lrec-conf.org/proceedings/lrec2010/pdf/756_Paper.pdf
Data Issues in English-to-Hindi Machine Translation
Statistical machine translation to morphologically richer languages is a challenging task and more so if the source and target languages differ in word order. Current state-of-the-art MT systems thus deliver mediocre results. Adding more parallel data often helps improve the results; if it doesn't, it may be caused by various problems such as different domains, bad alignment or noise in the new data. In this paper we evaluate the English-to-Hindi MT task from this data perspective. We discuss several available parallel data sources and provide cross-evaluation results on their combinations using two freely available statistical MT systems. We demonstrate various problems encountered in the data and describe automatic methods of data cleaning and normalization. We also show that the contents of two independently distributed data sets can unexpectedly overlap, which negatively affects translation quality. Together with the error analysis, we also present a new tool for viewing aligned corpora, which makes it easier to detect difficult parts in the data even for a developer not speaking the target language.
false
[]
[]
null
null
null
The research has been supported by the grants MSM0021620838 (Czech Ministry of Education) and EuromatrixPlus (FP7-ICT-2007-3-231720 of the EU and 7E09003 of the Czech Republic).
2010
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
tenfjord-etal-2006-ask
http://www.lrec-conf.org/proceedings/lrec2006/pdf/573_pdf.pdf
The ASK Corpus - a Language Learner Corpus of Norwegian as a Second Language
In our paper we present the design and interface of ASK, a language learner corpus of Norwegian as a second language which contains essays collected from language tests on two different proficiency levels as well as personal data from the test takers. In addition, the corpus also contains texts and relevant personal data from native Norwegians as control data. The texts as well as the personal data are marked up in XML according to the TEI Guidelines. In order to be able to classify "errors" in the texts, we have introduced new attributes to the TEI corr and sic tags. For each error tag, a correct form is also included in the text annotation. Finally, we employ an automatic tagger developed for standard Norwegian, the "Oslo-Bergen Tagger", together with a facility for manual tag correction. As corpus query system, we are using the Corpus Workbench developed at the University of Stuttgart together with a web search interface developed at Aksis, University of Bergen. The system allows for searching for combinations of words, error types, grammatical annotation and personal data.
true
[]
[]
Quality Education
null
null
null
2006
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
garain-basu-2019-titans-semeval
https://aclanthology.org/S19-2133
The Titans at SemEval-2019 Task 6: Offensive Language Identification, Categorization and Target Identification
This system paper is a description of the system submitted to "SemEval-2019 Task 6", where we had to detect offensive language in Twitter. There were two specific target audiences, immigrants and women. The language of the tweets was English. We were required to first detect whether a tweet contains offensive content, and then we had to find out whether the tweet was targeted against some individual, group or other entity. Finally we were required to classify the targeted audience.
true
[]
[]
Peace, Justice and Strong Institutions
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
lin-etal-2019-ji
https://aclanthology.org/2019.rocling-1.13
基於深度學習之簡答題問答系統初步探討(A Preliminary Study on Deep Learning-based Short Answer Question Answering System)
null
false
[]
[]
null
null
null
null
2019
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
quochi-2004-representing
http://www.lrec-conf.org/proceedings/lrec2004/pdf/463.pdf
Representing Italian Complex Nominals: A Pilot Study
A corpus-based investigation of Italian Complex Nominals (CNs), of the form N+PP, which aims at clarifying their syntactic and semantic constitution, is presented. The main goal is to find out useful parameters for their representation in a computational lexicon. As a reference model we have taken an implementation of Pustejovsky's Generative Lexicon Theory (1995), the SIMPLE Italian Lexicon, and in particular the Extended Qualia Structure. Italian CN formation mainly exploits post-modification; of particular interest here are CNs of the kind N+PP since this syntactic pattern is highly productive in Italian and such CNs very often translate compound nouns of other languages. One of the major problems posed by CNs for interpretation is the retrieval or identification of the semantic relation linking their components, which is (at least partially) implicit on the surface. Studying a small sample, we observed some interesting facts that could be useful when setting up a larger experiment to identify semantic relations and/or automatically learn the syntactic peculiarities of given semantic paradigms. Finally, a set of representational features exploiting the results from our corpus is proposed.
false
[]
[]
null
null
null
null
2004
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
bajcsy-joshi-1978-problem
https://aclanthology.org/J78-3028
The Problem of Naming Shapes: Vision-Language Interface
In this paper, we will pose more questions than present solutions. We want to raise some questions in the context of the representation of shapes of 3-D objects. One way to get a handle on this problem is to investigate whether labels of shapes and their acquisition reveal any structure of attributes or components of shapes that might be used for representation purposes. Another aspect of the puzzle of representation is the question whether the information is to be stored in analog or propositional form, and at what level this transformation from analog to propositional form takes place. In general, the shape of a 3-D compact object has two aspects: the surface aspect and the volume aspect. The surface aspect includes properties like concavity, convexity, planarity of surfaces, edges, and corners. The volume aspect distinguishes objects with holes from those without (topological properties), and describes objects with respect to their symmetry planes and axes, relative proportions, etc.
false
[]
[]
null
null
null
null
1978
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
dasgupta-ng-2007-high
https://aclanthology.org/N07-1020
High-Performance, Language-Independent Morphological Segmentation
This paper introduces an unsupervised morphological segmentation algorithm that shows robust performance for four languages with different levels of morphological complexity. In particular, our algorithm outperforms Goldsmith's Linguistica and Creutz and Lagus's Morfessor for English and Bengali, and achieves performance that is comparable to the best results for all three PASCAL evaluation datasets. Improvements arise from (1) the use of relative corpus frequency and suffix-level similarity for detecting incorrect morpheme attachments (a toy illustration of the relative-frequency heuristic follows this record) and (2) the induction of orthographic rules and allomorphs for segmenting words where roots exhibit spelling changes during morpheme attachment.
false
[]
[]
null
null
null
null
2007
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
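The abstract above credits part of the improvement to relative corpus frequency for filtering morpheme attachments. The following sketch illustrates the general heuristic only, not the paper's algorithm: a candidate split word = root + suffix is kept only if the root is itself attested frequently enough; the toy corpus, suffix list, and threshold are all assumptions.

from collections import Counter

corpus = "walk walked walking walks talk talked talking jump jumped".split()
freq = Counter(corpus)

def candidate_splits(word, suffixes=("ed", "ing", "s")):
    """Yield (root, suffix) pairs whose root passes a frequency filter."""
    for suf in suffixes:
        if word.endswith(suf):
            root = word[: -len(suf)]
            # Relative-frequency filter: the root must occur at least as
            # often as the inflected form (the threshold is an assumption).
            if freq[root] >= freq[word]:
                yield root, suf

print(list(candidate_splits("walked")))  # [('walk', 'ed')]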
mordido-meinel-2020-mark
https://aclanthology.org/2020.coling-main.178
Mark-Evaluate: Assessing Language Generation using Population Estimation Methods
We propose a family of metrics to assess language generation, derived from population estimation methods widely used in ecology. More specifically, we use mark-recapture and maximum-likelihood methods that have been applied over the past several decades to estimate the size of closed populations in the wild. We propose three novel metrics: ME_Petersen and ME_CAPTURE, which retrieve a single-valued assessment, and ME_Schnabel, which returns a double-valued metric to assess the evaluation set in terms of quality and diversity separately (the classic Petersen estimator that these metrics build on is sketched after this record). In synthetic experiments, our family of methods is sensitive to drops in quality and diversity. Moreover, our methods show a higher correlation to human evaluation than existing metrics on several challenging tasks, namely unconditional language generation, machine translation, and text summarization.
false
[]
[]
null
null
null
null
2020
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
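The mark-recapture estimators the record above builds on start from the classic Lincoln-Petersen estimate: given a first sample of n1 "marked" items, a second sample of n2 items, and m items seen in both, the population size is estimated as N_hat = n1 * n2 / m. The sketch below applies it to two overlapping samples of generated sentences; the toy data and the function name are assumptions for illustration, not the paper's implementation.

def petersen_estimate(marked: set, recaptured: set) -> float:
    """Lincoln-Petersen population estimate from two overlapping samples."""
    n1 = len(marked)
    n2 = len(recaptured)
    m = len(marked & recaptured)  # "recaptures": items seen in both samples
    if m == 0:
        raise ValueError("no recaptures; the estimate is undefined")
    return n1 * n2 / m

# Toy usage: two samples of generated sentences, keyed by string identity.
sample_a = {"the cat sat", "a dog ran", "hello world"}
sample_b = {"a dog ran", "hello world", "it rained", "the cat sat"}
print(petersen_estimate(sample_a, sample_b))  # 3 * 4 / 3 = 4.0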
popescu-2009-name
https://aclanthology.org/N09-2039
Name Perplexity
The accuracy of a Cross Document Coreference system depends on the amount of context available, a parameter that varies greatly from corpus to corpus. This paper presents a statistical model for computing name perplexity classes. For each perplexity class, the prior probability of coreference is estimated. The amount of context required for coreference is controlled by the prior coreference probability. We show that the prior coreference probability is an important factor for maintaining a good balance between precision and recall in cross document coreference systems.
false
[]
[]
null
null
null
null
2009
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false