ID | url | title | abstract | label_nlp4sg | task | method | goal1 | goal2 | goal3 | acknowledgments | year | sdg1 | sdg2 | sdg3 | sdg4 | sdg5 | sdg6 | sdg7 | sdg8 | sdg9 | sdg10 | sdg11 | sdg12 | sdg13 | sdg14 | sdg15 | sdg16 | sdg17 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
vassileva-etal-2021-automatic-transformation | https://aclanthology.org/2021.ranlp-srw.30 | Automatic Transformation of Clinical Narratives into Structured Format | Vast amounts of data in healthcare are available in unstructured text format, usually in the local language of each country. These documents contain valuable information. Secondary use of clinical narratives, and information extraction of key facts and relations about the patient's disease history, can foster preventive medicine and improve healthcare. In this paper, we propose a hybrid method for the automatic transformation of clinical text into a structured format. The documents are automatically sectioned into the following parts: diagnosis, patient history, patient status, and lab results. For the "Diagnosis" section, a deep learning text-based encoding into ICD-10 codes is applied using MBG-ClinicalBERT, a fine-tuned ClinicalBERT model for Bulgarian medical text. From the "Patient History" section, we identify patient symptoms using a rule-based approach enhanced with similarity search based on MBG-ClinicalBERT word embeddings. We also identify symptom relations like negation. For the "Patient Status" description, binary classification is used to determine the status of each anatomic organ. In this paper, we demonstrate different methods for adapting NLP tools for English and other languages to a low-resource language like Bulgarian. | true | [] | [] | Good Health and Well-Being | null | null | This research is funded by the Bulgarian Ministry of Education and Science, grant DO1-200/2018 'Electronic health care in Bulgaria' (e-Zdrave). It is also partially funded via the GATE project by the EU Horizon 2020 WIDESPREAD-2018-2020 TEAMING Phase 2 under GA No. 857155 and OP SE4SG under GA No. BG05M2OP001-1.003-0002-C01. | 2021 | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
che-etal-2009-multilingual | https://aclanthology.org/W09-1207 | Multilingual Dependency-based Syntactic and Semantic Parsing | Our CoNLL 2009 Shared Task system includes three cascaded components: syntactic parsing, predicate classification, and semantic role labeling. A pseudo-projective high-order graph-based model is used in our syntactic dependency parser. A support vector machine (SVM) model is used to classify predicate senses. Semantic role labeling is achieved using maximum entropy (MaxEnt) model based semantic role classification and integer linear programming (ILP) based post inference. Finally, we win the first place in the joint task, including both the closed and open challenges. | false | [] | [] | null | null | null | This work was supported by National Natural Science Foundation of China (NSFC) via grants 60803093 and 60675034, and the "863" National High-Tech Research and Development of China via grant 2008AA01Z144. | 2009 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
minkov-zettlemoyer-2012-discriminative | https://aclanthology.org/P12-1089 | Discriminative Learning for Joint Template Filling | This paper presents a joint model for template filling, where the goal is to automatically specify the fields of target relations such as seminar announcements or corporate acquisition events. The approach models mention detection, unification and field extraction in a flexible, feature-rich model that allows for joint modeling of interdependencies at all levels and across fields. Such an approach can, for example, learn likely event durations and the fact that start times should come before end times. While the joint inference space is large, we demonstrate effective learning with a Perceptron-style approach that uses simple, greedy beam decoding. Empirical results in two benchmark domains demonstrate consistently strong performance on both mention detection and template filling tasks. | false | [] | [] | null | null | null | null | 2012 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
jones-etal-2014-finding | https://aclanthology.org/C14-1044 | Finding Zelig in Text: A Measure for Normalising Linguistic Accommodation | Linguistic accommodation is a recognised indicator of social power and social distance. However, different individuals will vary their language to different degrees, and only a portion of this variance will be due to accommodation. This paper presents the Zelig Quotient, a method of normalising linguistic variation towards a particular individual, using an author's other communications as a baseline, thence to derive a method for identifying accommodation-induced variation with statistical significance. This work provides a platform for future efforts towards examining the importance of such phenomena in large communications datasets. | false | [] | [] | null | null | null | null | 2014 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
rajalakshmi-etal-2022-dlrg | https://aclanthology.org/2022.dravidianlangtech-1.32 | DLRG@DravidianLangTech-ACL2022: Abusive Comment Detection in Tamil using Multilingual Transformer Models | Online Social Networks have let people connect and interact with each other. They do, however, also provide a platform for online abusers to propagate abusive content. The majority of these abusive remarks are written in a multilingual style, which allows them to easily slip past internet inspection. This paper presents a system developed for the Shared Task on Abusive Comment Detection (Misogyny, Misandry, Homophobia, Transphobia, Xenophobia, CounterSpeech, Hope Speech) in Tamil at DravidianLangTech@ACL 2022 to detect the abusive category of each comment. We approach the task with three methodologies: Machine Learning, Deep Learning and Transformer-based modeling, for two sets of data: a Tamil and a Tamil+English language dataset. The dataset used in our system can be accessed from the competition on CodaLab. For Machine Learning, eight algorithms were implemented, among which Random Forest gave the best result with the Tamil+English dataset, with a weighted average F1-score of 0.78. For Deep Learning, Bi-Directional LSTM gave the best result with pre-trained word embeddings. In Transformer-based modeling, we used IndicBERT and mBERT with fine-tuning, among which mBERT gave the best result for the Tamil dataset with a weighted average F1-score of 0.7. | true | [] | [] | Peace, Justice and Strong Institutions | null | null | We would like to thank the management of Vellore Institute of Technology, Chennai for their support to carry out this research. | 2022 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false |
szubert-steedman-2019-node | https://aclanthology.org/D19-5321 | Node Embeddings for Graph Merging: Case of Knowledge Graph Construction | Combining two graphs requires merging the nodes which are counterparts of each other. In this process errors occur, resulting in incorrect merging or incorrect failure to merge. We find a high prevalence of such errors when using AskNET, an algorithm for building Knowledge Graphs from text corpora. AskNET's node matching method uses string similarity, which we propose to replace with vector embedding similarity. We explore graph-based and word-based embedding models and show an overall error reduction from 56% to 23.6%, with a reduction of over a half in both types of incorrect node matching. | false | [] | [] | null | null | null | null | 2019 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
beinborn-etal-2013-cognate | https://aclanthology.org/I13-1112 | Cognate Production using Character-based Machine Translation | Cognates are words in different languages that are associated with each other by language learners. Thus, cognates are important indicators for the prediction of the perceived difficulty of a text. We introduce a method for automatic cognate production using character-based machine translation. We show that our approach is able to learn production patterns from noisy training data and that it works for a wide range of language pairs. It even works across different alphabets, e.g. we obtain good results on the tested language pairs English-Russian, English-Greek, and English-Farsi. Our method performs significantly better than similarity measures used in previous work on cognates. | false | [] | [] | null | null | null | This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I/82806, and by the Klaus Tschira Foundation under project No. 00.133.2008. | 2013 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
khokhlova-zakharov-2010-studying | http://www.lrec-conf.org/proceedings/lrec2010/pdf/21_Paper.pdf | Studying Word Sketches for Russian | Without any doubt, corpora are vital tools for linguistic studies and a solution for applied tasks. Although corpus opportunities are very useful, there is a need for another kind of software for further improvement of linguistic research, as it is impossible to process huge amounts of linguistic data manually. The Sketch Engine is a corpus tool which takes as input a corpus of any language and corresponding grammar patterns. The paper describes the writing of a Sketch grammar for the Russian language as a part of the Sketch Engine system. The system gives information about a word's collocability in concrete dependency models, and generates lists of the most frequent phrases for a given word based on appropriate models. The paper deals with two different approaches to writing rules for the grammar, based on morphological information, and also with applying word sketches to the Russian language. The data evidence that such results may find extensive use in various fields of linguistics, such as dictionary compiling, language learning and teaching, translation (including machine translation), phraseology, information retrieval, etc. | false | [] | [] | null | null | null | null | 2010 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
chauhan-etal-2020-one | https://aclanthology.org/2020.aacl-main.31 | All-in-One: A Deep Attentive Multi-task Learning Framework for Humour, Sarcasm, Offensive, Motivation, and Sentiment on Memes | In this paper, we aim at learning the relationships and similarities of a variety of tasks, such as humour detection, sarcasm detection, offensive content detection, motivational content detection and sentiment analysis on a somewhat complicated form of information, i.e., memes. We propose a multi-task, multi-modal deep learning framework to solve multiple tasks simultaneously. For multi-tasking, we propose two attention-like mechanisms, viz., the Inter-task Relationship Module (iTRM) and the Inter-class Relationship Module (iCRM). The main motivation of iTRM is to learn the relationship between the tasks to realize how they help each other. In contrast, iCRM develops relations between the different classes of tasks. Finally, representations from both the attentions are concatenated and shared across the five tasks (i.e., humour, sarcasm, offensive, motivational, and sentiment) for multi-tasking. We use the recently released dataset in the Memotion Analysis task @ SemEval 2020, which consists of memes annotated for the classes as mentioned above. Empirical results on the Memotion dataset show the efficacy of our proposed approach over the existing state-of-the-art systems (Baseline and SemEval 2020 winner). The evaluation also indicates that the proposed multi-task framework yields better performance over single-task learning. | true | [] | [] | Peace, Justice and Strong Institutions | null | null | The research reported here is partially supported by SkyMap Global India Private Limited. Dushyant Singh Chauhan acknowledges the support of the Prime Minister Research Fellowship (PMRF), Govt. of India. Asif Ekbal acknowledges the Young Faculty Research Fellowship (YFRF), supported by the Visvesvaraya PhD scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, being implemented by Digital India Corporation (formerly Media Lab Asia). | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false |
lavie-2011-evaluating | https://aclanthology.org/2011.mtsummit-tutorials.3 | Evaluating the Output of Machine Translation Systems | Over the past twenty years, we have attacked the historical methodological barriers between statistical machine translation and traditional models of syntax, semantics, and structure. In this tutorial, we will survey some of the central issues and techniques from each of these aspects, with an emphasis on `deeply theoretically integrated' models, rather than hybrid approaches such as superficial statistical aggregation or system combination of outputs produced by traditional symbolic components. On syntactic SMT, we will explore the trade-offs for SMT between learnability and representational expressiveness. After establishing a foundation in the theory and practice of stochastic transduction grammars, we will examine very recent new approaches to automatic unsupervised induction of various classes of | false | [] | [] | null | null | null | null | 2011 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ouyang-etal-2009-integrated | https://aclanthology.org/P09-2029 | An Integrated Multi-document Summarization Approach based on Word Hierarchical Representation | This paper introduces a novel hierarchical summarization approach for automatic multidocument summarization. By creating a hierarchical representation of the words in the input document set, the proposed approach is able to incorporate various objectives of multidocument summarization through an integrated framework. The evaluation is conducted on the DUC 2007 data set. | false | [] | [] | null | null | null | The work described in this paper was partially supported by Hong Kong RGC Projects (No. PolyU 5217/07E) and partially supported by The Hong Kong Polytechnic University internal grants (A-PA6L and G-YG80). | 2009 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
williams-koehn-2014-syntax | https://aclanthology.org/D14-2005 | Syntax-Based Statistical Machine Translation | null | false | [] | [] | null | null | null | null | 2014 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
eisenstein-2013-phonological | https://aclanthology.org/W13-1102 | Phonological Factors in Social Media Writing | Does phonological variation get transcribed into social media text? This paper investigates examples of the phonological variable of consonant cluster reduction in Twitter. Not only does this variable appear frequently, but it displays the same sensitivity to linguistic context as in spoken language. This suggests that when social media writing transcribes phonological properties of speech, it is not merely a case of inventing orthographic transcriptions. Rather, social media displays influence from structural properties of the phonological system. | false | [] | [] | null | null | null | Thanks to Brendan O'Connor for building the Twitter dataset that made this research possible. Thanks to the reviewers for their helpful comments. | 2013 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
paetzold-specia-2016-simplenets | https://aclanthology.org/W16-2388 | SimpleNets: Quality Estimation with Resource-Light Neural Networks | We introduce SimpleNets: a resource-light solution to the sentence-level Quality Estimation task of WMT16 that combines Recurrent Neural Networks, word embedding models, and the principle of compositionality. The SimpleNets systems explore the idea that the quality of a translation can be derived from the quality of its n-grams. This approach has been successfully employed in Text Simplification quality assessment in the past. Our experiments show that, surprisingly, our models can learn more about a translation's quality by focusing on the original sentence, rather than on the translation itself. | false | [] | [] | null | null | null | null | 2016 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
bladier-etal-2018-german | https://aclanthology.org/P18-3009 | German and French Neural Supertagging Experiments for LTAG Parsing | We present ongoing work on data-driven parsing of German and French with Lexicalized Tree Adjoining Grammars. We use a supertagging approach combined with deep learning. We show the challenges of extracting LTAG supertags from the French Treebank, introduce the use of left- and right-sister-adjunction, present a neural architecture for the supertagger, and report experiments of n-best supertagging for French and German. | false | [] | [] | null | null | null | This work was carried out as a part of the research project TREEGRASP (http://treegrasp.phil.hhu.de) funded by a Consolidator Grant of the European Research Council (ERC). We thank three anonymous reviewers for their careful reading, valuable suggestions and constructive comments. | 2018 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
qin-etal-2021-neural | https://aclanthology.org/2021.acl-long.456 | Neural-Symbolic Solver for Math Word Problems with Auxiliary Tasks | Previous math word problem solvers following the encoder-decoder paradigm fail to explicitly incorporate essential math symbolic constraints, leading to unexplainable and unreasonable predictions. Herein, we propose the Neural-Symbolic Solver (NS-Solver) to explicitly and seamlessly incorporate different levels of symbolic constraints by auxiliary tasks. Our NS-Solver consists of a problem reader to encode problems, a programmer to generate symbolic equations, and a symbolic executor to obtain answers. Along with target expression supervision, our solver is also optimized via 4 new auxiliary objectives to enforce different symbolic reasoning: a) a self-supervised number prediction task predicting both number quantity and number locations; b) a commonsense constant prediction task predicting what prior knowledge (e.g. how many legs a chicken has) is required; c) a program consistency checker computing the semantic loss between the predicted equation and target equation to ensure reasonable equation mapping; d) a duality exploiting task exploiting the quasi duality between symbolic equation generation and the problem's part-of-speech generation to enhance the understanding ability of a solver. Besides, to provide a more realistic and challenging benchmark for developing a universal and scalable solver, we also construct a new large-scale MWP benchmark CM17K consisting of 4 kinds of MWPs (arithmetic, one-unknown linear, one-unknown non-linear, equation set) with more than 17K samples. Extensive experiments on Math23K and our CM17K demonstrate the superiority of our NS-Solver compared to state-of-the-art methods. Deep neural networks have achieved remarkable successes in natural language processing recently. Although neural models have demonstrated performance superior to humans on some tasks, e.g. reading comprehension (Rajpurkar et al., 2016; Devlin et al., 2019; Lan et al.), they still lack the ability of discrete reasoning, resulting in low accuracy on math reasoning. Thus, it is hard for pure neural network approaches to tackle the task of solving math word problems (MWPs), which requires a model to be capable of natural language understanding and discrete reasoning. MWP solving aims to automatically answer a math word problem by understanding the textual description of the problem and reasoning out the underlying answer. A typical MWP is a short story that describes a partial state of the world and poses a question about an unknown quantity or multiple unknown quantities. To solve an MWP, the relevant quantities need to be identified from the text. Furthermore, the correct operators along with their computation order among these quantities need to be determined. Therefore, integrating neural networks with symbolic reasoning is crucial for solving MWPs. Inspired by the recent amazing progress on neural semantic parsing (Liang et al., 2017a) and reading comprehension, we address this problem by neural-symbolic computing. | false | [] | [] | null | null | null | null | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
aziz-specia-2010-combining | https://aclanthology.org/S10-1024 | Combining Dictionaries and Contextual Information for Cross-Lingual Lexical Substitution | We describe two systems participating in Semeval-2010's Cross-Lingual Lexical Substitution task: USPwlv and WLVusp. Both systems are based on two main components: (i) a dictionary to provide a number of possible translations for each source word, and (ii) a contextual model to select the best translation according to the context where the source word occurs. These components and the way they are integrated are different in the two systems: they exploit corpus-based and linguistic resources, and supervised and unsupervised learning methods. Among the 14 participants in the subtask to identify the best translation, our systems were ranked 2nd and 4th in terms of recall, 3rd and 4th in terms of precision. Both systems outperformed the baselines in all subtasks according to all metrics used. | false | [] | [] | null | null | null | null | 2010 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
danlos-2005-automatic | https://aclanthology.org/I05-2013 | Automatic Recognition of French Expletive Pronoun Occurrences | We present a tool, called ILIMP, which takes as input a raw text in French and produces as output the same text in which every occurrence of the pronoun il is tagged either with tag [ANA] for anaphoric or [IMP] for impersonal or expletive. This tool is therefore designed to distinguish between the anaphoric occurrences of il, for which an anaphora resolution system has to look for an antecedent, and the expletive occurrences of this pronoun, for which it does not make sense to look for an antecedent. The precision rate for ILIMP is 97.5%. The few errors are analyzed in detail. Other tasks using the method developed for ILIMP are described briefly, as well as the use of ILIMP in a modular syntactic analysis system. | false | [] | [] | null | null | null | null | 2005 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
johnson-charniak-2004-tag | https://aclanthology.org/P04-1005 | A TAG-based noisy-channel model of speech repairs | This paper describes a noisy channel model of speech repairs, which can identify and correct repairs in speech transcripts. A syntactic parser is used as the source model, and a novel type of TAG-based transducer is the channel model. The use of TAG is motivated by the intuition that the reparandum is a "rough copy" of the repair. The model is trained and tested on the Switchboard disfluency-annotated corpus. | false | [] | [] | null | null | null | null | 2004 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
sagot-boullier-2006-deep | http://www.lrec-conf.org/proceedings/lrec2006/pdf/806_pdf.pdf | Deep non-probabilistic parsing of large corpora | This paper reports a large-scale non-probabilistic parsing experiment with a deep LFG parser. We briefly introduce the parser we used, named SXLFG, and the resources that were used together with it. Then we report quantitative results about the parsing of a multi-million word journalistic corpus. We show that we can parse more than 6 million words in less than 12 hours, only 6.7% of all sentences reaching the 1s timeout. This shows that deep large-coverage non-probabilistic parsers can be efficient enough to parse very large corpora in a reasonable amount of time. | false | [] | [] | null | null | null | null | 2006 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
chen-etal-2022-focus | https://aclanthology.org/2022.acl-short.74 | Focus on the Target's Vocabulary: Masked Label Smoothing for Machine Translation | Label smoothing and vocabulary sharing are two widely used techniques in neural machine translation models. However, we argue that simply applying both techniques can be conflicting and even leads to sub-optimal performance. When allocating smoothed probability, original label smoothing treats the source-side words that would never appear in the target language equally to the real target-side words, which could bias the translation model. To address this issue, we propose Masked Label Smoothing (MLS), a new mechanism that masks the soft label probability of source-side words to zero. Simple yet effective, MLS manages to better integrate label smoothing with vocabulary sharing. Our extensive experiments show that MLS consistently yields improvement over original label smoothing on different datasets, including bilingual and multilingual translation, in terms of both translation quality and the model's calibration. Our code is released at PKUnlp-icler. | false | [] | [] | null | null | null | We thank all reviewers for their valuable suggestions for this work. This paper is supported by the | 2022 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
gottwald-etal-2008-tapping | http://www.lrec-conf.org/proceedings/lrec2008/pdf/117_paper.pdf | Tapping Huge Temporally Indexed Textual Resources with WCTAnalyze | WCTAnalyze is a tool for storing, accessing and visually analyzing huge collections of temporally indexed data. It is motivated by applications in media analysis, business intelligence etc., where higher-level analysis is performed on top of linguistically and statistically processed unstructured textual data. WCTAnalyze combines fast access with economical storage behaviour and provides many built-in visualization options for result presentation, both in detail and in contrast. It thus enables an efficient and effective way to explore chronological text patterns of word forms, their co-occurrence sets and co-occurrence set intersections. Digging deep into co-occurrences of the same semantically or syntactically describing word forms, some entities can be recognized as being temporally related, whereas others differ significantly. This behaviour motivates approaches to interactively discovering events based on co-occurrence subsets. | false | [] | [] | null | null | null | null | 2008 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
fong-berwick-1992-isolating | https://aclanthology.org/C92-2095 | Isolating Cross-linguistic Parsing Complexity with a Principles-and-Parameters Parser: A Case Study of Japanese and English | As parsing models and linguistic theories have broadened to encompass a wider range of non-English languages, a particularly useful "stress test" is to build a single theory/parser pair that can work for multiple languages, in the best case with minor variation, perhaps restricted to the lexicon. This paper reports on the results of just such a test applied to a fully operational (Prolog) implementation of a so-called principles-and-parameters model of syntax, for the case of Japanese and English. This paper has two basic aims: (1) to show how an implemented model for an entire principles-and-parameters model (essentially all of the linguistic theory in Lasnik & Uriagereka (1988)), see figure 2 for a computer snapshot, leads directly to both a parser for multiple languages and a useful "computational linguistics workbench" in which one can easily experiment with alternative linguistic theoretical formulations of grammatical principles as well as alternative computational strategies; | false | [] | [] | null | null | null | null | 1992 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
chanen-patrick-2004-complex | https://aclanthology.org/U04-1001 | Complex, Corpus-Driven, Syntactic Features for Word Sense Disambiguation | Although syntactic features offer more specific information about the context surrounding a target word in a Word Sense Disambiguation (WSD) task, in general, they have not distinguished themselves much above positional features such as bag-of-words. In this paper we offer two methods for increasing the recall rate when using syntactic features on the WSD task by: 1) using an algorithm for discovering in the corpus every possible syntactic feature involving a target word, and 2) using wildcards in place of the lemmas in the templates of the syntactic features. In the best experimental results on the SENSEVAL-2 data we achieved an F-measure of 53.1%, which is well above the mean F-measure performance of official SENSEVAL-2 entries, 44.2%. These results are encouraging considering that only one kind of feature is used and only a simple Support Vector Machine (SVM) running with the defaults is used for the machine learning. | false | [] | [] | null | null | null | The word sense disambiguation architecture was jointly constructed with David Bell. We would like to thank the Capital Markets CRC and the University of Sydney for financial support and everyone in the Sydney Language Technology Research Group for their support. | 2004 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
li-etal-2021-recommend-reason | https://aclanthology.org/2021.findings-emnlp.66 | Recommend for a Reason: Unlocking the Power of Unsupervised Aspect-Sentiment Co-Extraction | Compliments and concerns in reviews are valuable for understanding users' shopping interests and their opinions with respect to specific aspects of certain items. Existing review-based recommenders favor large and complex language encoders that can only learn latent and uninterpretable text representations. They lack explicit user-attention and item-property modeling, which however could provide valuable information beyond the ability to recommend items. Therefore, we propose a tightly coupled two-stage approach, including an Aspect-Sentiment Pair Extractor (ASPE) and an Attention-Property-aware Rating Estimator (APRE). Unsupervised ASPE mines Aspect-Sentiment pairs (AS-pairs) and APRE predicts ratings using AS-pairs as concrete aspect-level evidences. Extensive experiments on seven real-world Amazon Review Datasets demonstrate that ASPE can effectively extract AS-pairs which enable APRE to deliver superior accuracy over the leading baselines. | false | [] | [] | null | null | null | We would like to thank the reviewers for their helpful comments. The work was partially supported by NSF DGE-1829071 and NSF IIS-2106859. | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
wang-etal-2021-chemner
|
https://aclanthology.org/2021.emnlp-main.424
|
ChemNER: Fine-Grained Chemistry Named Entity Recognition with Ontology-Guided Distant Supervision
|
Scientific literature analysis needs fine-grained named entity recognition (NER) to provide a wide range of information for scientific discovery. For example, chemistry research needs to study dozens to hundreds of distinct, fine-grained entity types, making consistent and accurate annotation difficult even for crowds of domain experts. On the other hand, domain-specific ontologies and knowledge bases (KBs) can be easily accessed, constructed, or integrated, which makes distant supervision realistic for fine-grained chemistry NER. In distant supervision, training labels are generated by matching mentions in a document with the concepts in the knowledge bases (KBs). However, this kind of KB-matching suffers from two major challenges: incomplete annotation and noisy annotation. We propose CHEMNER, an ontology-guided, distantly-supervised method for fine-grained chemistry NER to tackle these challenges. It leverages the chemistry type ontology structure to generate distant labels with novel methods of flexible KB-matching and ontology-guided multi-type disambiguation. It significantly improves the distant label generation for the subsequent sequence labeling model training. We also provide an expert-labeled, chemistry NER dataset with 62 fine-grained chemistry types (e.g., chemical compounds and chemical reactions). Experimental results show that CHEMNER is highly effective, outperforming substantially the state-of-the-art NER methods (with a .25 absolute F1 score improvement).
| true |
[] |
[] |
Industry, Innovation and Infrastructure
| null | null |
This work was supported by the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897, US DARPA KAIROS Program No. FA8750-19-2-1004, SocialSim Program No. W911NF-17-C-0099, and INCAS Program No. HR001121C0165, and National Science Foundation IIS-19-56151, IIS-17-41317, and IIS 17-04532. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the National Science Foundation, DARPA or the U.S. Government.
|
2021
| false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false |
chang-etal-2017-novel-trajectory
|
https://aclanthology.org/O17-3008
|
A Novel Trajectory-based Spatial-Temporal Spectral Features for Speech Emotion Recognition
|
Speech is one of the most natural forms of human communication. Recognizing emotion from speech continues to be an important research venue to advance human-machine interface design and human behavior understanding. In this work, we propose a novel set of features, termed trajectory-based spatial-temporal spectral features, to recognize emotions from speech. The core idea centers on deriving descriptors both spatially and temporally on speech spectrograms over a sub-utterance frame (e.g., 250 ms), an inspiration from dense trajectory-based video descriptors. We conduct categorical and dimensional emotion recognition experiments and compare our proposed features to both the well-established set of prosodic and spectral features and the state-of-the-art exhaustive feature extraction. Our experiments demonstrate that our features by themselves achieve comparable accuracies in the 4-class emotion recognition and valence detection tasks, and obtain a significant improvement in activation detection. We additionally show that there exists complementary information in our proposed features to the existing acoustic feature set, which can be used to obtain an improved emotion recognition accuracy.
| false |
[] |
[] | null | null | null |
Thanks to Ministry of Science and Technology (103-2218-E-007-012-MY3) for funding.
|
2017
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
nandi-etal-2013-towards
|
https://aclanthology.org/W13-3725
|
Towards Building Parallel Dependency Treebanks: Intra-Chunk Expansion and Alignment for English Dependency Treebank
|
The paper presents our work on the annotation of intra-chunk dependencies on an English treebank that was previously annotated with Inter-chunk dependencies, and for which there exists a fully expanded parallel Hindi dependency treebank. This provides fully parsed dependency trees for the English treebank. We also report an analysis of the inter-annotator agreement for this chunk expansion task. Further, these fully expanded parallel Hindi and English treebanks were word aligned and an analysis for the task has been given. Issues related to intra-chunk expansion and alignment for the language pair Hindi-English are discussed and guidelines for these tasks have been prepared and released.
| false |
[] |
[] | null | null | null |
We gratefully acknowledge the provision of the useful resource by way of the Hindi Treebank developed under HUTB, of which the Hindi treebank used for our research purpose is a part, and the work for which is supported by the NSF grant (Award Number: CNS 0751202; CFDA Number: 47.070). Also, any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
|
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
xia-zong-2011-pos
|
https://aclanthology.org/I11-1069
|
A POS-based Ensemble Model for Cross-domain Sentiment Classification
|
In this paper, we focus on the tasks of cross-domain sentiment classification. We find that across different domains, features with some types of part-of-speech (POS) tags are domain-dependent, while some others are domain-free. Based on this finding, we propose a POS-based ensemble model to efficiently integrate features with different types of POS tags to improve the classification performance. Weights are trained by stochastic gradient descent (SGD) to optimize the perceptron and minimal classification error (MCE) criteria. Experimental results show that the proposed ensemble model is quite effective for the task of cross-domain sentiment classification.
| false |
[] |
[] | null | null | null |
The research work has been funded by the Natural Science Foundation of China under Grant No. 60975053 and 61003160, and supported by the External Cooperation Program of the Chinese Academy of Sciences.
|
2011
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
pulman-sukkarieh-2005-automatic
|
https://aclanthology.org/W05-0202
|
Automatic Short Answer Marking
|
Our aim is to investigate computational linguistics (CL) techniques in marking short free text responses automatically. Successful automatic marking of free text answers would seem to presuppose an advanced level of performance in automated natural language understanding. However, recent advances in CL techniques have opened up the possibility of being able to automate the marking of free text responses typed into a computer without having to create systems that fully understand the answers. This paper describes some of the techniques we have tried so far vis-à-vis this problem with results, discussion and description of the main issues encountered.
| false |
[] |
[] | null | null | null | null |
2005
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
tanaka-ishii-etal-1998-reactive
|
https://aclanthology.org/C98-2204
|
Reactive Content Selection in the Generation of Real-time Soccer Commentary
|
MIKE is an automatic commentary system that generates a commentary of a simulated soccer game in English, French, or Japanese. One of the major technical challenges involved in live sports commentary is the reactive selection of content to describe complex, rapidly unfolding situation. To address this challenge, MIKE employs importance scores that intuitively capture the amount of information communicated to the audience. We describe how a principle of maximizing the total gain of importance scores during a game can be used to incorporate content selection into the surface generation module, thus accounting for issues such as interruption and abbreviation. Sample commentaries produced by MIKE are presented and used to evaluate different methods for content selection and generation in terms of efficiency of communication.
| false |
[] |
[] | null | null | null | null |
1998
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
zhang-etal-2012-lazy
|
https://aclanthology.org/C12-1189
|
A Lazy Learning Model for Entity Linking using Query-Specific Information
|
Entity linking disambiguates a mention of an entity in text to a Knowledge Base (KB). Most previous studies disambiguate a mention of a name (e.g. "AZ") based on the distribution knowledge learned from labeled instances, which are related to other names (e.g. "Hoffman", "Chad Johnson", etc.). The gaps among the distributions of the instances related to different names hinder the further improvement of the previous approaches. This paper proposes a lazy learning model, which allows us to improve the learning process with the distribution information specific to the queried name (e.g. "AZ"). To obtain this distribution information, we automatically label some relevant instances for the queried name leveraging its unambiguous synonyms. Besides, another advantage is that our approach can still benefit from the labeled data related to other names (e.g. "Hoffman", "Chad Johnson", etc.), because our model is trained on both the labeled data sets of queried and other names by mining their shared predictive structure.
| false |
[] |
[] | null | null | null |
This work is partially supported by Microsoft Research Asia eHealth Theme Program.
|
2012
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
rajendran-etal-2018-sentiment
|
https://aclanthology.org/L18-1099
|
Sentiment-Stance-Specificity (SSS) Dataset: Identifying Support-based Entailment among Opinions.
|
Computational argumentation aims to model arguments as a set of premises that either support each other or collectively support a conclusion. We prepare three datasets of text-hypothesis pairs with support-based entailment based on opinions present in hotel reviews using a distant supervision approach. Support-based entailment is defined as the existence of a specific opinion (premise) that supports as well as entails a more general opinion and where these together support a generalised conclusion. A set of rules is proposed based on three different components-sentiment, stance and specificity to automatically predict support-based entailment. Two annotators manually annotated the relations among text-hypothesis pairs with an inter-rater agreement of 0.80. We compare the performance of the rules which gave an overall accuracy of 0.83. Further, we compare the performance of textual entailment under various conditions. The overall accuracy was 89.54%, 90.00% and 96.19% for our three datasets.
| false |
[] |
[] | null | null | null | null |
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
gatt-etal-2009-tuna
|
https://aclanthology.org/W09-0629
|
The TUNA-REG Challenge 2009: Overview and Evaluation Results
|
The TUNA-REG'09 Challenge was one of the shared-task evaluation competitions at Generation Challenges 2009. TUNA-REG'09 used data from the TUNA Corpus of paired representations of entities and human-authored referring expressions. The shared task was to create systems that generate referring expressions for entities given representations of sets of entities and their properties. Four teams submitted six systems to TUNA-REG'09. We evaluated the six systems and two sets of human-authored referring expressions using several automatic intrinsic measures, a human-assessed intrinsic evaluation and a human task performance experiment. This report describes the TUNA-REG task and the evaluation methods used, and presents the evaluation results.
| false |
[] |
[] | null | null | null |
We thank our colleagues at the University of Brighton who participated in the identification experiment, and the Masters students at UCL, Sussex and Brighton who participated in the quality assessment experiment. The evaluations were funded by EPSRC (UK) grant EP/G03995X/1.
|
2009
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
fraisse-etal-2012-context
|
https://aclanthology.org/C12-3018
|
An In-Context and Collaborative Software Localisation Model
|
We propose a demonstration of our in-context and collaborative software localisation model. It involves volunteer localisers and end users in the localisation process via an efficient and dynamic workflow: while using an application (in context), users knowing the source language of the application (often but not always English) can modify strings of the user interface presented by the application in their current context. The implementation of that approach to localisation requires the integration of a collaborative platform. That leads to a new tripartite localisation workflow. We have experimented with our approach on Notepad++. A demonstration video is proposed as a supplementary material.
| false |
[] |
[] | null | null | null | null |
2012
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
utsuro-etal-1992-lexical
|
https://aclanthology.org/C92-2088
|
Lexical Knowledge Acquisition from Bilingual Corpora
|
For practical research in natural language processing, it is indispensable to develop a large-scale semantic dictionary for computers. It is especially important to improve the techniques for compiling semantic dictionaries from natural language texts such as those in existing human dictionaries or in large corpora. However, there are at least two difficulties in analyzing existing texts: the problem of syntactic ambiguities and the problem of polysemy. Our approach to solve these difficulties is to make use of translation examples in two distinct languages that have quite different syntactic structures and word meanings. The reason we took this approach is that in many cases both syntactic and semantic ambiguities are resolved by comparing analyzed results from both languages. In this paper, we propose a method for resolving the syntactic ambiguities of translation examples of bilingual corpora and a method for acquiring lexical knowledge, such as case frames of verbs and attribute sets of nouns.
| false |
[] |
[] | null | null | null | null |
1992
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
han-etal-2021-fine
|
https://aclanthology.org/2021.naacl-main.122
|
Fine-grained Post-training for Improving Retrieval-based Dialogue Systems
|
Retrieval-based dialogue systems display an outstanding performance when pre-trained language models are used, which includes bidirectional encoder representations from transformers (BERT). During the multi-turn response selection, BERT focuses on training the relationship between the context with multiple utterances and the response. However, this method of training is insufficient when considering the relations between each utterance in the context. This leads to a problem of not completely understanding the context flow that is required to select a response. To address this issue, we propose a new fine-grained post-training method that reflects the characteristics of the multi-turn dialogue. Specifically, the model learns the utterance level interactions by training every short context-response pair in a dialogue session. Furthermore, by using a new training objective, the utterance relevance classification, the model understands the semantic relevance and coherence between the dialogue utterances. Experimental results show that our model achieves new state-of-the-art with significant margins on three benchmark datasets. This suggests that the fine-grained post-training method is highly effective for the response selection task.
| false |
[] |
[] | null | null | null | null |
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
alfalahi-etal-2015-expanding
|
https://aclanthology.org/W15-2611
|
Expanding a dictionary of marker words for uncertainty and negation using distributional semantics
|
Approaches to determining the factuality of diagnoses and findings in clinical text tend to rely on dictionaries of marker words for uncertainty and negation. Here, a method for semi-automatically expanding a dictionary of marker words using distributional semantics is presented and evaluated. It is shown that ranking candidates for inclusion according to their proximity to cluster centroids of semantically similar seed words is more successful than ranking them according to proximity to each individual seed word.
| false |
[] |
[] | null | null | null |
This work was partly funded through the project StaViCTA by the framework grant "the Digitized Society Past, Present, and Future" with No. 2012-5659 from the Swedish Research Council (Vetenskapsrådet) and partly by the Swedish Foundation for Strategic Research through the project High-Performance Data Mining for Drug Effect at Stockholm University, Sweden. The authors would also like to direct thanks to the reviewers for valuable comments.
|
2015
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
martens-passarotti-2014-thomas
|
http://www.lrec-conf.org/proceedings/lrec2014/pdf/70_Paper.pdf
|
Thomas Aquinas in the T\"uNDRA: Integrating the Index Thomisticus Treebank into CLARIN-D
|
This paper describes the integration of the Index Thomisticus Treebank (IT-TB) into the web-based treebank search and visualization application TüNDRA (Tübingen aNnotated Data Retrieval & Analysis). TüNDRA was originally designed to provide access via the Internet to constituency treebanks and to tools for searching and visualizing them, as well as tabulating statistics about their contents. TüNDRA has now been extended to also provide full support for dependency treebanks with non-projective dependencies, in order to integrate the IT-TB and future treebanks with similar properties. These treebanks are queried using an adapted form of the TIGERSearch query language, which can search both hierarchical and sequential information in treebanks in a single query. As a web application, making the IT-TB accessible via TüNDRA makes the treebank and the tools to make use of it available to a large community without having to distribute software and show users how to install it.
| false |
[] |
[] | null | null | null | null |
2014
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
tait-etal-1999-mable
|
https://aclanthology.org/1999.tc-1.12
|
MABLe: A Multi-lingual Authoring Tool for Business Letters
|
MABLe allows Spanish or Greek business letters writers with limited English to construct good quality, stylistically well formed British English letters. MABLe is a PC program, based on a domain dependant text grammar of fixed and variable phrases that together enforce linguistic cohesion. Interactions with the system are in the user's own language, and the constructed letter may be viewed in that language for sense checking. Our experience to date has shown that the approach it uses to machine aided translation, gives a sufficiently effective and flexible regime for the construction of genuine finished quality documents in the limited domain of business correspondence.
In this paper, we will first review the application domain in which MABLe is intended to operate. Then we will review the approach to machine assisted translation it embodies. Next, we describe the system, its architecture, and the implementation of the linguistic and programming elements. An example interaction sequence is then presented. Finally the successes and shortcomings of the work will be identified and some directions for possible future work will be identified.
| false |
[] |
[] | null | null | null |
The authors would like to thank our numerous colleagues who worked on the MABLe Project, especially Prof. Dr. Peter Hellwig and Heinz-Detlev Koch of the University of Heidelberg, Dr. Chris Smith of MARI, Periklis Tsahageas of SENA, and our other partners STI, PEA and CES. The MABLe project was supported in part by the EU Framework IV Language Engineering programme under contract LEI203. Microsoft® Word and Access are registered trademarks of Microsoft Corporation.
|
1999
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
fu-etal-2013-exploiting
|
https://aclanthology.org/D13-1122
|
Exploiting Multiple Sources for Open-Domain Hypernym Discovery
|
Hypernym discovery aims to extract such noun pairs that one noun is a hypernym of the other. Most previous methods are based on lexical patterns but perform badly on open-domain data. Other work extracts hypernym relations from encyclopedias but has limited coverage. This paper proposes a simple yet effective distant supervision framework for Chinese open-domain hypernym discovery. Given an entity name, we try to discover its hypernyms by leveraging knowledge from multiple sources, i.e., search engine results, encyclopedias, and morphology of the entity name. First, we extract candidate hypernyms from the above sources. Then, we apply a statistical ranking model to select correct hypernyms. A set of novel features is proposed for the ranking model. We also present a heuristic strategy to build a large-scale noisy training data for the model without human annotation. Experimental results demonstrate that our approach outperforms the state-of-the-art methods on a manually labeled test dataset.
| false |
[] |
[] | null | null | null |
This work was supported by National Natural Science Foundation of China (NSFC) via grant 61133012, 61073126 and the National 863 Leading Technology Research Project via grant 2012AA011102. Special thanks to Zhenghua Li, Wanxiang Che, Wei Song, Yanyan Zhao, Yuhang Guo and the anonymous reviewers for insightful comments and suggestions. Thanks are also due to our annotators Ni Han and Zhenghua Li.
|
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
tomlinson-etal-2014-mygoal
|
http://www.lrec-conf.org/proceedings/lrec2014/pdf/1120_Paper.pdf
|
\#mygoal: Finding Motivations on Twitter
|
Our everyday language reflects our psychological and cognitive state and affects the states of other individuals. In this contribution we look at the intersection between motivational state and language. We create a set of hashtags, which are annotated for the degree to which they are used by individuals to mark up language that is indicative of a collection of factors that interact with an individual's motivational state. We look for tags that reflect a goal mention, reward, or a perception of control. Finally, we present results for a language-model based classifier which is able to predict the presence of one of these factors in a tweet with between 69% and 80% accuracy on a balanced testing set. Our approach suggests that hashtags can be used to understand, not just the language of topics, but the deeper psychological and social meaning of a tweet.
| true |
[] |
[] |
Good Health and Well-Being
| null | null |
This research was funded by the Intelligence Advanced Research Projects Activity (IARPA) through the Department of Defense US Army Research Laboratory (DoD / ARL). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government.
|
2014
| false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
taslimipoor-etal-2020-mtlb
|
https://aclanthology.org/2020.mwe-1.19
|
MTLB-STRUCT @Parseme 2020: Capturing Unseen Multiword Expressions Using Multi-task Learning and Pre-trained Masked Language Models
|
This paper describes a semi-supervised system that jointly learns verbal multiword expressions (VMWEs) and dependency parse trees as an auxiliary task. The model benefits from pre-trained multilingual BERT. BERT hidden layers are shared among the two tasks and we introduce an additional linear layer to retrieve VMWE tags. The dependency parse tree prediction is modelled by a linear layer and a bilinear one plus a tree CRF on top of BERT. The system has participated in the open track of the PARSEME shared task 2020 and ranked first in terms of F1-score in identifying unseen VMWEs as well as VMWEs in general, averaged across all 14 languages.
| false |
[] |
[] | null | null | null |
This paper reports on research supported by Cambridge Assessment, University of Cambridge. We are grateful to the anonymous reviewers for their valuable feedback. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used in this research.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
stojanovski-fraser-2018-coreference
|
https://aclanthology.org/W18-6306
|
Coreference and Coherence in Neural Machine Translation: A Study Using Oracle Experiments
|
Cross-sentence context can provide valuable information in Machine Translation and is critical for translation of anaphoric pronouns and for providing consistent translations. In this paper, we devise simple oracle experiments targeting coreference and coherence. Oracles are an easy way to evaluate the effect of different discourse-level phenomena in NMT using BLEU and eliminate the necessity to manually define challenge sets for this purpose. We propose two context-aware NMT models and compare them against models working on a concatenation of consecutive sentences. Concatenation models perform better, but are computationally expensive. We show that NMT models taking advantage of context oracle signals can achieve considerable gains in BLEU, of up to 7.02 BLEU for coreference and 1.89 BLEU for coherence on subtitles translation. Access to strong signals allows us to make clear comparisons between context-aware models.
| false |
[] |
[] | null | null | null |
We would like to thank the anonymous reviewers for their valuable input and Daniel Ledda for his help with examples. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement № 640550).
|
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
zhou-liu-1997-similarity
|
https://aclanthology.org/O97-2011
|
Similarity Comparison between Chinese Sentences
| null | false |
[] |
[] | null | null | null | null |
1997
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
asthana-ekbal-2017-supervised
|
https://aclanthology.org/W17-7529
|
Supervised Methods For Ranking Relations In Web Search
|
In this paper we propose an efficient technique for ranking triples of a knowledge base using information from full text. We devise supervised machine learning algorithms to compute the relevance scores for item-property pairs where an item can have more than one value. Such a score measures the degree to which an entity belongs to a type, and this plays an important role in ranking the search results. The problem is, in itself, new and not explored so much in the literature, possibly because of the heterogeneous behaviors of both semantic knowledge base and full-text articles. The classifiers exploit statistical features computed from the Wikipedia articles and the semantic information obtained from the word embedding concepts. We develop models based on traditional supervised models like Support Vector Machine (SVM) and Random Forest (RF); and then using deep Convolutional Neural Network (CNN). We perform experiments as provided by WSDM cup 2017, which provides about 1k human judgments of person-profession pairs. Evaluation shows that machine learning based approaches produce encouraging performance with the highest accuracy of 71%. The contributions of the current work are twofold, viz. we focus on a problem that has not been explored much, and show the usage of powerful word-embedding features that produce promising results.
| false |
[] |
[] | null | null | null | null |
2017
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
zheng-etal-2021-consistency
|
https://aclanthology.org/2021.acl-long.264
|
Consistency Regularization for Cross-Lingual Fine-Tuning
|
Fine-tuning pre-trained cross-lingual language models can transfer task-specific supervision from one language to the others. In this work, we propose to improve cross-lingual fine-tuning with consistency regularization. Specifically, we use example consistency regularization to penalize the prediction sensitivity to four types of data augmentations, i.e., subword sampling, Gaussian noise, code-switch substitution, and machine translation. In addition, we employ model consistency to regularize the models trained with two augmented versions of the same training set. Experimental results on the XTREME benchmark show that our method significantly improves cross-lingual fine-tuning across various tasks, including text classification, question answering, and sequence labeling.
| false |
[] |
[] | null | null | null |
Wanxiang Che is the corresponding author. This work was supported by the National Key R&D Program of China via grant 2020AAA0106501 and the National Natural Science Foundation of China (NSFC) via grant 61976072 and 61772153.
|
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
condon-miller-2002-sharing
|
https://aclanthology.org/W02-0713
|
Sharing Problems and Solutions for Machine Translation of Spoken and Written Interaction
|
Examples from chat interaction are presented to demonstrate that machine translation of written interaction shares many problems with translation of spoken interaction. The potential for common solutions to the problems is illustrated by describing operations that normalize and tag input before translation. Segmenting utterances into small translation units and processing short turns separately are also motivated using data from chat.
| false |
[] |
[] | null | null | null | null |
2002
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
nakamura-kawahara-2016-constructing
|
https://aclanthology.org/W16-1006
|
Constructing a Dictionary Describing Feature Changes of Arguments in Event Sentences
|
Common sense knowledge plays an essential role for natural language understanding, human-machine communication and so forth. In this paper, we acquire knowledge of events as common sense knowledge because there is a possibility that dictionaries of such knowledge are useful for recognition of implication relations in texts, inference of human activities and their planning, and so on. As for event knowledge, we focus on feature changes of arguments (hereafter, FCAs) in event sentences as knowledge of events. To construct a dictionary of FCAs, we propose a framework for acquiring such knowledge based on both of the automatic approach and the collective intelligence approach to exploit merits of both approaches. We acquired FCAs in event sentences through crowdsourcing and conducted the subjective evaluation to validate whether the FCAs are adequately acquired. As a result of the evaluation, it was shown that we were able to reasonably well capture FCAs in event sentences.
| false |
[] |
[] | null | null | null | null |
2016
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
mohammad-etal-2016-dataset
|
https://aclanthology.org/L16-1623
|
A Dataset for Detecting Stance in Tweets
|
We can often detect from a person's utterances whether he/she is in favor of or against a given target entity (a product, topic, another person, etc.). Here for the first time we present a dataset of tweets annotated for whether the tweeter is in favor of or against pre-chosen targets of interest: their stance. The targets of interest may or may not be referred to in the tweets, and they may or may not be the target of opinion in the tweets. The data pertains to six targets of interest commonly known and debated in the United States. Apart from stance, the tweets are also annotated for whether the target of interest is the target of opinion in the tweet. The annotations were performed by crowdsourcing. Several techniques were employed to encourage high-quality annotations (for example, providing clear and simple instructions) and to identify and discard poor annotations (for example, using a small set of check questions annotated by the authors). This Stance Dataset, which was subsequently also annotated for sentiment, can be used to better understand the relationship between stance, sentiment, entity relationships, and textual inference.
| true |
[] |
[] |
Peace, Justice and Strong Institutions
| null | null | null |
2016
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false |
moghimifar-etal-2020-cosmo
|
https://aclanthology.org/2020.coling-main.467
|
CosMo: Conditional Seq2Seq-based Mixture Model for Zero-Shot Commonsense Question Answering
|
Commonsense reasoning refers to the ability of evaluating a social situation and acting accordingly. Identification of the implicit causes and effects of a social context is the driving capability which can enable machines to perform commonsense reasoning. The dynamic world of social interactions requires context-dependent on-demand systems to infer such underlying information. However, current approaches in this realm lack the ability to perform commonsense reasoning upon facing an unseen situation, mostly due to incapability of identifying a diverse range of implicit social relations. Hence they fail to estimate the correct reasoning path. In this paper, we present Conditional SEQ2SEQ-based Mixture model (COSMO), which provides us with the capabilities of dynamic and diverse content generation. We use COSMO to generate context-dependent clauses, which form a dynamic Knowledge Graph (KG) on-the-fly for commonsense reasoning. To show the adaptability of our model to context-dependent knowledge generation, we address the task of zero-shot commonsense question answering. The empirical results indicate an improvement of up to +5.2% over the state-of-the-art models.
| false |
[] |
[] | null | null | null | null |
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
mellish-evans-1989-natural
|
https://aclanthology.org/J89-4002
|
Natural Language Generation from Plans
|
This paper addresses the problem of designing a system that accepts a plan structure of the sort generated by AI planning programs and produces natural language text explaining how to execute the plan. We describe a system that generates text from plans produced by the NONLIN planner (Tate 1976). The results of our system are promising, but the texts still lack much of the smoothness of human-generated text. This is partly because, although the domain of plans seems a priori to provide rich structure that a natural language generator can use, in practice a plan that is generated without the production of explanations in mind rarely contains the kinds of information that would yield an interesting natural language account. For instance, the hierarchical organization assigned to a plan is liable to reflect more a programmer's approach to generating a class of plans efficiently than the way that a human would naturally "chunk" the relevant actions. Such problems are, of course, similar to those that Swartout (1983) encountered with expert systems. In addition, AI planners have a restricted view of the world that is hard to match up with the normal semantics of natural language expressions. Thus constructs that are primitive to the planner may be only clumsily or misleadingly expressed in natural language, and the range of possible natural language constructs may be artificially limited by the shallowness of the planner's representations.
| false |
[] |
[] | null | null | null |
The work reported here was made possible by SERC grant GR/D/ 08876. Both authors are currently supported by SERC Advanced Fellowships.
|
1989
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
hinkelman-allen-1989-two
|
https://aclanthology.org/P89-1026
|
Two Constraints on Speech Act Ambiguity
|
Existing plan-based theories of speech act interpretation do not account for the conventional aspect of speech acts. We use patterns of linguistic features (e.g. mood, verb form, sentence adverbials, thematic roles) to suggest a range of speech act interpretations for the utterance. These are filtered using plan-based conversational implicatures to eliminate inappropriate ones. Extended plan reasoning is available but not necessary for familiar forms. Taking speech act ambiguity seriously, with these two constraints, explains how "Can you pass the salt?" is a typical indirect request while "Are you able to pass the salt?" is not.
| false |
[] |
[] | null | null | null | null |
1989
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
garcia-2021-exploring
|
https://aclanthology.org/2021.acl-long.281
|
Exploring the Representation of Word Meanings in Context: A Case Study on Homonymy and Synonymy
|
This paper presents a multilingual study of word meaning representations in context. We assess the ability of both static and contextualized models to adequately represent different lexical-semantic relations, such as homonymy and synonymy. To do so, we created a new multilingual dataset that allows us to perform a controlled evaluation of several factors such as the impact of the surrounding context or the overlap between words, conveying the same or different senses. A systematic assessment on four scenarios shows that the best monolingual models based on Transformers can adequately disambiguate homonyms in context. However, as they rely heavily on context, these models fail at representing words with different senses when occurring in similar sentences. Experiments are performed in Galician, Portuguese, English, and Spanish, and both the dataset (with more than 3,000 evaluation items) and new models are freely released with this study.
| false |
[] |
[] | null | null | null |
We would like to thank the anonymous reviewers for their valuable comments, and NVIDIA Corporation for the donation of a Titan Xp GPU. This research is funded by a Ramón y Cajal grant (RYC2019-028473-I) and by the Galician Government (ERDF 2014-2020: Call ED431G 2019/04).
|
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
andersen-etal-2013-developing
|
https://aclanthology.org/W13-1704
|
Developing and testing a self-assessment and tutoring system
|
Automated feedback on writing may be a useful complement to teacher comments in the process of learning a foreign language. This paper presents a self-assessment and tutoring system which combines an holistic score with detection and correction of frequent errors and furthermore provides a qualitative assessment of each individual sentence, thus making the language learner aware of potentially problematic areas rather than providing a panacea. The system has been tested by learners in a range of educational institutions, and their feedback has guided its development.
| true |
[] |
[] |
Quality Education
| null | null |
Special thanks to Ted Briscoe and Marek Rei, as well as to the anonymous reviewers, for their valuable contributions at various stages.
|
2013
| false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false |
nawaz-etal-2010-evaluating
|
https://aclanthology.org/W10-3112
|
Evaluating a meta-knowledge annotation scheme for bio-events
|
The correct interpretation of biomedical texts by text mining systems requires the recognition of a range of types of high-level information (or meta-knowledge) about the text. Examples include expressions of negation and speculation, as well as pragmatic/rhetorical intent (e.g. whether the information expressed represents a hypothesis, generally accepted knowledge, new experimental knowledge, etc.) Although such types of information have previously been annotated at the text-span level (most commonly sentences), annotation at the level of the event is currently quite sparse. In this paper, we focus on the evaluation of the multi-dimensional annotation scheme that we have developed specifically for enriching bio-events with meta-knowledge information. Our annotation scheme is intended to be general enough to allow integration with different types of bio-event annotation, whilst being detailed enough to capture important subtleties in the nature of the meta-knowledge expressed in the text. To our knowledge, our scheme is unique within the field with regards to the diversity of meta-knowledge aspects annotated for each event, whilst the evaluation results have confirmed its feasibility and soundness.
| true |
[] |
[] |
Good Health and Well-Being
| null | null |
The work described in this paper has been funded by the Biotechnology and Biological Sciences Research Council through grant numbers BBS/B/13640, BB/F006039/1 (ONDEX)
|
2010
| false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
lafourcade-boitet-2002-unl
|
http://www.lrec-conf.org/proceedings/lrec2002/pdf/354.pdf
|
UNL Lexical Selection with Conceptual Vectors
|
When deconverting a UNL graph into some natural language LG, we often encounter lexical items (called UWs) made of an English headword and formalized semantic restrictions, such as "look for (icl>do, agt>person)", which are not yet connected to lemmas, so that it is necessary to find a "nearest" UW in the UNL-LG dictionary, such as "look for (icl>action, agt>human, obj>thing)". Then, this UW may be connected to several lemmas of LG. In order to solve these problems of incompleteness and polysemy, we are applying a method based on the computation of "conceptual vectors", previously used successfully in the context of thematic indexing of French and English documents.
| false |
[] |
[] | null | null | null | null |
2002
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
chinchor-marsh-1998-appendix
|
https://aclanthology.org/M98-1027
|
Appendix D: MUC-7 Information Extraction Task Definition (version 5.1)
|
Brief Definition of Information Extraction Task Information extraction in the sense of the Message Understanding Conferences has been traditionally defined as the extraction of information from a text in the form of text strings and processed text strings which are placed into slots labeled to indicate the kind of information that can fill them. So, for example, a slot labeled NAME would contain a name string taken directly out of the text or modified in some well-defined way, such as by deleting all but the person's surname. Another example could be a slot called WEAPON which requires as a fill one of a set of designated classes of weapons based on some categorization of the weapons that has meaning in the events of import such as GUN or BOMB in a terrorist event. The input to information extraction is a set of texts, usually unclassified newswire articles, and the output is a set of filled slots. The set of filled slots may represent an entity with its attributes, a relationship between two or more entities, or an event with various entities playing roles and/or being in certain relationships. Entities with their attributes are extracted in the Template Element task; relationships between two or more entities are extracted in the Template Relation task; and events with various entities playing roles and/or being in certain relationships are extracted in the Scenario Template task. 1.
| false |
[] |
[] | null | null | null | null |
1998
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
choi-etal-2008-overcome
|
https://aclanthology.org/Y08-1015
|
How to Overcome the Domain Barriers in Pattern-Based Machine Translation System
|
One of the difficult issues in pattern-based machine translation systems is how to overcome the domain difference when adapting a system from one domain to another. This paper describes how we have resolved such barriers among domains as default target word of any domain, domain-specific patterns, and domain adaptation of engine modules in pattern-based machine translation system, especially English-Korean pattern-based machine translation system. For this, we will discuss two types of customization methods which mean a method adapting an existing system to new domain. One is the pure customization method introduced for patent machine translation system in 2006 and another is the upgraded customization method applied to scientific paper machine translation system in 2007. By introducing an upgraded customization method, we could implement a practical machine translation system for scientific paper translation within 8 months, in comparison with the patent machine translation system that was completed even in 24 months by the pure customization method. The translation accuracy of scientific paper machine translation system also rose from 77.25% to 81.10% in spite of the short term of 8 months.
| false |
[] |
[] | null | null | null | null |
2008
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
hassert-etal-2021-ud
|
https://aclanthology.org/2021.udw-1.5
|
UD on Software Requirements: Application and Challenges
|
Technical documents present distinct challenges when used in natural language processing tasks such as part-of-speech tagging or syntactic parsing. This is mainly due to the nature of their content, which may differ greatly from more studied texts like news articles, encyclopedic extracts or social media entries. This work contributes an English corpus composed of software requirement texts annotated in Universal Dependencies (UD) to study the differences, challenges and issues encountered on these documents when following the UD guidelines. Different structural and linguistic phenomena are studied in the light of their impact on manual and automatic dependency annotation. To better cope with texts of this nature, some modifications and features are proposed in order to enrich the existing UD guidelines to better cover technical texts. The proposed corpus is compared to other existing corpora to show the structural complexity of the texts as well as the challenge it presents to recent processing methods. This contribution is the first software requirement corpus annotated with UD relations.
| false |
[] |
[] | null | null | null | null |
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
sakakini-etal-2019-equipping
|
https://aclanthology.org/W19-4448
|
Equipping Educational Applications with Domain Knowledge
|
One of the challenges of building natural language processing (NLP) applications for education is finding a large domain-specific corpus for the subject of interest (e.g., history or science). To address this challenge, we propose a tool, Dexter, that extracts a subject-specific corpus from a heterogeneous corpus, such as Wikipedia, by relying on a small seed corpus and distributed document representations. We empirically show the impact of the generated corpus on language modeling, estimating word embeddings, and consequently, distractor generation, resulting in a better performance than while using a general domain corpus, a heuristically constructed domain-specific corpus, and a corpus generated by a popular system: BootCaT.
| true |
[] |
[] |
Quality Education
| null | null |
This work is supported by IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) -a research collaboration as part of the IBM AI Horizons Network.
|
2019
| false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false |
ulinski-etal-2019-spatialnet
|
https://aclanthology.org/W19-1607
|
SpatialNet: A Declarative Resource for Spatial Relations
|
This paper introduces SpatialNet, a novel resource which links linguistic expressions to actual spatial configurations. SpatialNet is based on FrameNet (Ruppenhofer et al., 2016) and VigNet (Coyne et al., 2011), two resources which use frame semantics to encode lexical meaning. SpatialNet uses a deep semantic representation of spatial relations to provide a formal description of how a language expresses spatial information. This formal representation of the lexical semantics of spatial language also provides a consistent way to represent spatial meaning across multiple languages. In this paper, we describe the structure of SpatialNet, with examples from English and German. We also show how SpatialNet can be combined with other existing NLP tools to create a text-to-scene system for a language.
| false |
[] |
[] | null | null | null | null |
2019
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
van-der-wees-etal-2015-whats
|
https://aclanthology.org/P15-2092
|
What's in a Domain? Analyzing Genre and Topic Differences in Statistical Machine Translation
|
Domain adaptation is an active field of research in statistical machine translation (SMT), but so far most work has ignored the distinction between the topic and genre of documents. In this paper we quantify and disentangle the impact of genre and topic differences on translation quality by introducing a new data set that has controlled topic and genre distributions. In addition, we perform a detailed analysis showing that differences across topics only explain to a limited degree translation performance differences across genres, and that genre-specific errors are more attributable to model coverage than to suboptimal scoring of translation candidates.
| false |
[] |
[] | null | null | null |
This research was funded in part by the Netherlands Organization for Scientific Research (NWO) under project number 639.022.213. We thank Rachel Cotterill, Nigel Dewdney, and the anonymous reviewers for their valuable comments.
|
2015
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
suresh-ong-2021-negatives
|
https://aclanthology.org/2021.emnlp-main.359
|
Not All Negatives are Equal: Label-Aware Contrastive Loss for Fine-grained Text Classification
|
Fine-grained classification involves dealing with datasets with larger number of classes with subtle differences between them. Guiding the model to focus on differentiating dimensions between these commonly confusable classes is key to improving performance on fine-grained tasks. In this work, we analyse the contrastive fine-tuning of pre-trained language models on two fine-grained text classification tasks, emotion classification and sentiment analysis. We adaptively embed class relationships into a contrastive objective function to help differently weigh the positives and negatives, and in particular, weighting closely confusable negatives more than less similar negative examples. We find that Label-aware Contrastive Loss outperforms previous contrastive methods, in the presence of larger number and/or more confusable classes, and helps models to produce output distributions that are more differentiated.
| false |
[] |
[] | null | null | null |
This research is supported by the National Research Foundation, Singapore under its AI Singapore Program (AISG Award No: AISG2-RP-2020-016).
|
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
gollapalli-etal-2020-ester
|
https://aclanthology.org/2020.findings-emnlp.93
|
ESTeR: Combining Word Co-occurrences and Word Associations for Unsupervised Emotion Detection
|
Accurate detection of emotions in user-generated text was shown to have several applications for e-commerce, public well-being, and disaster management. Currently, the state-of-the-art performance for emotion detection in text is obtained using complex, deep learning models trained on domain-specific, labeled data. In this paper, we propose Emotion-Sensitive TextRank (ESTeR), an unsupervised model for identifying emotions using a novel similarity function based on random walks on graphs. Our model combines large-scale word co-occurrence information with word associations from lexicons avoiding not only the dependence on labeled datasets, but also an explicit mapping of words to latent spaces used in emotion-enriched word embeddings. Our similarity function can also be computed efficiently. We study a diverse range of datasets including recent tweets related to COVID-19 to illustrate the superior performance of our model and report insights on public emotions during the ongoing pandemic.
| false |
[] |
[] | null | null | null |
This research/project is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG-GC-2019-001). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
hu-etal-2018-shot
|
https://aclanthology.org/C18-1041
|
Few-Shot Charge Prediction with Discriminative Legal Attributes
|
Automatic charge prediction aims to predict the final charges according to the fact descriptions in criminal cases and plays a crucial role in legal assistant systems. Existing works on charge prediction perform adequately on those high-frequency charges but are not yet capable of predicting few-shot charges with limited cases. Moreover, there exist many confusing charge pairs, whose fact descriptions are fairly similar to each other. To address these issues, we introduce several discriminative attributes of charges as the internal mapping between fact descriptions and charges. These attributes provide additional information for few-shot charges, as well as effective signals for distinguishing confusing charges. More specifically, we propose an attribute-attentive charge prediction model to infer the attributes and charges simultaneously. Experimental results on real-world datasets demonstrate that our proposed model achieves significant and consistent improvements over other state-of-the-art baselines. Specifically, our model outperforms other baselines by more than 50% in the few-shot scenario. Our codes and datasets can be obtained from https://github.com/thunlp/attribute_charge.
| true |
[] |
[] |
Peace, Justice and Strong Institutions
| null | null |
We thank all the anonymous reviewers for their insightful comments. This work is supported by the National Natural Science Foundation of China (NSFC No. 61661146007, 61572273) and Tsinghua University Initiative Scientific Research Program (20151080406). This research is part of the NExT++ project, supported by the National Research Foundation, Prime Ministers Office, Singapore under its IRC@Singapore Funding Initiative.
|
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false |
nouvel-etal-2012-coupling
|
https://aclanthology.org/W12-0510
|
Coupling Knowledge-Based and Data-Driven Systems for Named Entity Recognition
|
Within Information Extraction tasks, Named Entity Recognition has received much attention over the latest decades. From symbolic / knowledge-based to data-driven / machine-learning systems, many approaches have been experimented. Our work may be viewed as an attempt to bridge the gap from the data-driven perspective back to the knowledge-based one. We use a knowledge-based system, based on manually implemented transducers, that reaches satisfactory performances. It has the undisputable advantage of being modular. However, such a hand-crafted system requires substantial efforts to cope with dedicated tasks. In this context, we implemented a pattern extractor that extracts symbolic knowledge, using hierarchical sequential pattern mining over annotated corpora. To assess the accuracy of mined patterns, we designed a module that recognizes Named Entities in texts by determining their most probable boundaries. Instead of considering Named Entity Recognition as a labeling task, it relies on complex context-aware features provided by lower-level systems and considers the tagging task as a markovian process. Using those systems, coupling knowledge-based system with extracted patterns is straightforward and leads to a competitive hybrid NE-tagger. We report experiments using this system and compare it to other hybridization strategies along with a baseline CRF model.
| false |
[] |
[] | null | null | null | null |
2012
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
su-etal-2021-dependency
|
https://aclanthology.org/2021.conll-1.2
|
Dependency Induction Through the Lens of Visual Perception
|
Most previous work on grammar induction focuses on learning phrasal or dependency structure purely from text. However, because the signal provided by text alone is limited, recently introduced visually grounded syntax models make use of multimodal information leading to improved performance in constituency grammar induction. However, as compared to dependency grammars, constituency grammars do not provide a straightforward way to incorporate visual information without enforcing language-specific heuristics. In this paper, we propose an unsupervised grammar induction model that leverages word concreteness and a structural vision-based heuristic to jointly learn constituency-structure and dependency-structure grammars. Our experiments find that concreteness is a strong indicator for learning dependency grammars, improving the direct attachment score (DAS) by over 50% as compared to state-of-the-art models trained on pure text. Next, we propose an extension of our model that leverages both word concreteness and visual semantic role labels in constituency and dependency parsing. Our experiments show that the proposed extension outperforms the current state-of-the-art visually grounded models in constituency parsing even with a smaller grammar size. 1
| false |
[] |
[] | null | null | null |
This work was supported in part by the DARPA GAILA project (award HR00111990063). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. We would also like to thank the reviewers for their thoughtful comments.
|
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
inui-1996-internet
|
https://aclanthology.org/C96-2175
|
The Internet a ``natural'' channel for language learning
|
The network as a motivational source for using a foreign language. Electronic networks can be useful in many ways for language learners. First of all, network facilities (e-mail, news, WWW home-pages) minimize not only the boundaries of time and space, but they also help to break communication barriers. They are a wonderful tool for USING a foreign language. E-mail, for example, can be used not only for interaction between teachers and students, but also for interaction among students (collaborative learning). Students can even ask for help from friends or "experts" living elsewhere, on the other side of the globe. There have been quite a few attempts to introduce these new tools into the classroom. For example, there are several well established mailing lists between Japanese and foreign schools. This allows Japanese kids to practice, let's say English, by exchanging messages with students from "abroad", chatting about their favorite topics like music, sport or any other hobby. Obviously, this kind of communication is meaningful for the student, since s/he can talk about things s/he is concerned with. What role then can CALL system play in this new setting? Rather than trying to play the role people are very good at (answering on the fly questions on any topic, common sense reasoning, etc.), CALL system should assist people by providing the learner with information humans are generally fairly poor at. One way to help the user is by providing him with information (databases) he is looking for. For example, all language learners are concerned with lexicons. Having fabulous browsing tools, computers have a great advantage over traditional dictionaries. Also, people are not very good in explaining the contexts in which a word may be used, or in explaining the difference between two words. Last, but not least, existing NLP technology, such as parsing or machine translation, could be incorporated into the development of 'intelligent dictionaries'.
However, before doing so, we have to consider several basic issues: what information is useful, that is, what information should be provided to the learner, when and how? For example, rather than killing the user by an information overflow, like these long lists of translations that most electronic dictionaries provide, lists in which the user has to dig deeply in order to find the relevant word, one could parametrize the level of detail, scope and grain size of translations for a given text or text fragment. In sum, there should be a balance between the information provided by the system and the user's competence. Following this line of reasoning we have started to work on a user friendly interface for a bilingual lexicon (English-Japanese). Two features of our prototype are worth mentioning: (a) the tool is implemented as a WWW
| true |
[] |
[] |
Quality Education
| null | null | null |
1996
| false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false |
strzalkowskl-1990-invert
|
https://aclanthology.org/C90-2060
|
How to Invert a Natural Language Parser Into an Efficient Generator: An Algorithm for Logic Grammars
|
The use of a single grammar in natural language parsing and generation is most desirable for a variety of reasons including efficiency, perspicuity, integrity, robustness, and a certain amount of elegance. In this paper we present an algorithm for automated inversion of a PROLOG-coded unification parser into an efficient unification generator, using the collections of minimal sets of essential arguments (MSEA) for predicates. The algorithm is also applicable to more abstract systems for writing logic grammars, such as DCG.
| false |
[] |
[] | null | null | null | null |
1990
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
liu-etal-2010-semantic
|
https://aclanthology.org/C10-1079
|
Semantic Role Labeling for News Tweets
|
News tweets that report what is happening have become an important real-time information source. We raise the problem of Semantic Role Labeling (SRL) for news tweets, which is meaningful for fine grained information extraction and retrieval. We present a self-supervised learning approach to train a domain specific SRL system to resolve the problem. A large volume of training data is automatically labeled, by leveraging the existing SRL system on news domain and content similarity between news and news tweets. On a human annotated test set, our system achieves state-of-the-art performance, outperforming the SRL system trained on news.
| false |
[] |
[] | null | null | null | null |
2010
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
chen-etal-2006-reordering
|
https://aclanthology.org/2006.iwslt-papers.4
|
Reordering rules for phrase-based statistical machine translation
|
This paper proposes the use of rules automatically extracted from word aligned training data to model word reordering phenomena in phrase-based statistical machine translation. Scores computed from matching rules are used as additional feature functions in the rescoring stage of the automatic translation process from various languages to English, in the ambit of a popular traveling domain task. Rules are defined either on Part-of-Speech or words. Part-of-Speech rules are extracted from and applied to Chinese, while lexicalized rules are extracted from and applied to Chinese, Japanese and Arabic. Both Part-of-Speech and lexicalized rules yield an absolute improvement of the BLEU score of 0.4-0.9 points without affecting the NIST score, on the Chinese-to-English translation task. On other language pairs which differ a lot in the word order, the use of lexicalized rules allows us to observe significant improvements as well.
| false |
[] |
[] | null | null | null |
This work has been funded by the European Union under the integrated project TC-STAR -Technology and Corpora for Speech-to-Speech Translation -(IST-2002-FP6-506738, http://www.tc-star.org).
|
2006
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
padro-etal-2010-freeling
|
http://www.lrec-conf.org/proceedings/lrec2010/pdf/14_Paper.pdf
|
FreeLing 2.1: Five Years of Open-source Language Processing Tools
|
FreeLing is an open-source multilingual language processing library providing a wide range of language analyzers for several languages. It offers text processing and language annotation facilities to natural language processing application developers, simplifying the task of building those applications. FreeLing is customizable and extensible. Developers can use the default linguistic resources (dictionaries, lexicons, grammars, etc.) directly, or extend them, adapt them to specific domains, or even develop new ones for specific languages. This paper overviews the recent history of this tool, summarizes the improvements and extensions incorporated in the latest version, and depicts the architecture of the library. Special focus is brought to the fact and consequences of the library being open-source: After five years and over 35,000 downloads, a growing user community has extended the initial three languages (English, Spanish and Catalan) to eight (adding Galician, Italian, Welsh, Portuguese, and Asturian), proving that the collaborative open model is a productive approach for the development of NLP tools and resources.
| false |
[] |
[] | null | null | null |
This work has been partially funded by the Spanish Government via the KNOW2 (TIN2009-14715-C04-03/04) project.
|
2010
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
attnas-etal-2005-integration
|
https://aclanthology.org/2005.mtsummit-papers.28
|
Integration of SYSTRAN MT Systems in an Open Workflow
| null | false |
[] |
[] | null | null | null | null |
2005
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
trisedya-etal-2018-gtr
|
https://aclanthology.org/P18-1151
|
GTR-LSTM: A Triple Encoder for Sentence Generation from RDF Data
|
A knowledge base is a large repository of facts that are mainly represented as RDF triples, each of which consists of a subject, a predicate (relationship), and an object. The RDF triple representation offers a simple interface for applications to access the facts. However, this representation is not in a natural language form, which is difficult for humans to understand. We address this problem by proposing a system to translate a set of RDF triples into natural sentences based on an encoder-decoder framework. To preserve as much information from RDF triples as possible, we propose a novel graph-based triple encoder. The proposed encoder encodes not only the elements of the triples but also the relationships both within a triple and between the triples. Experimental results show that the proposed encoder achieves a consistent improvement over the baseline models by up to 17.6%, 6.0%, and 16.4% in three common metrics BLEU, METEOR, and TER, respectively.
| false |
[] |
[] | null | null | null |
Bayu Distiawan Trisedya is supported by the Indonesian Endowment Fund for Education (LPDP). This work is supported by Australian Research Council (ARC) Discovery Project DP180102050 and Future Fellowships Project FT120100832, and Google Faculty Research Award. This work is partly done while Jianzhong Qi is visiting the University of New South Wales. Wei Wang was partially supported by D2DCRC DC25002, DC25003, ARC DP 170103710 and 180103411.
|
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
messina-etal-2021-aimh
|
https://aclanthology.org/2021.semeval-1.140
|
AIMH at SemEval-2021 Task 6: Multimodal Classification Using an Ensemble of Transformer Models
|
This paper describes the system used by the AIMH Team to approach the SemEval Task 6. We propose an approach that relies on an architecture based on the transformer model to process multimodal content (text and images) in memes. Our architecture, called DVTT (Double Visual Textual Transformer), approaches Subtasks 1 and 3 of Task 6 as multi-label classification problems, where the text and/or images of the meme are processed, and the probabilities of the presence of each possible persuasion technique are returned as a result. DVTT uses two complete networks of transformers that work on text and images that are mutually conditioned. One of the two modalities acts as the main one and the second one intervenes to enrich the first one, thus obtaining two distinct ways of operation. The two transformers outputs are merged by averaging the inferred probabilities for each possible label, and the overall network is trained end-to-end with a binary cross-entropy loss.
| false |
[] |
[] | null | null | null |
This work was partially supported by "Intelligenza Artificiale per il Monitoraggio Visuale dei Siti Culturali" (AI4CHSites) CNR4C program, CUP B15J19001040004, by the AI4EU project, funded by the EC (H2020 -Contract n. 825619), and AI4Media under GA 951911.
|
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
balouchzahi-etal-2021-mucs
|
https://aclanthology.org/2021.dravidianlangtech-1.47
|
MUCS@DravidianLangTech-EACL2021:COOLI-Code-Mixing Offensive Language Identification
|
This paper describes the models submitted by the team MUCS for Offensive Language Identification in Dravidian Languages-EACL 2021 shared task that aims at identifying and classifying code-mixed texts of three language pairs namely, Kannada-English (Kn-En), Malayalam-English (Ma-En), and Tamil-English (Ta-En) into six predefined categories (5 categories in Ma-En language pair). Two models, namely, COOLI-Ensemble and COOLI-Keras are trained with the char sequences extracted from the sentences combined with words in the sentences as features. Out of the two proposed models, COOLI-Ensemble model (best among our models) obtained first rank for Ma-En language pair with 0.97 weighted F1-score and fourth and sixth ranks with 0.75 and 0.69 weighted F1-score for Ta-En and Kn-En language pairs respectively.
| true |
[] |
[] |
Peace, Justice and Strong Institutions
| null | null | null |
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false |
zhang-etal-2022-multilingual
|
https://aclanthology.org/2022.acl-long.287
|
Multilingual Document-Level Translation Enables Zero-Shot Transfer From Sentences to Documents
|
Document-level neural machine translation (DocNMT) achieves coherent translations by incorporating cross-sentence context. However, for most language pairs there's a shortage of parallel documents, although parallel sentences are readily available. In this paper, we study whether and how contextual modeling in DocNMT is transferable via multilingual modeling. We focus on the scenario of zero-shot transfer from teacher languages with document level data to student languages with no documents but sentence level data, and for the first time treat document-level translation as a transfer learning problem. Using simple concatenation-based DocNMT, we explore the effect of 3 factors on the transfer: the number of teacher languages with document level data, the balance between document and sentence level data at training, and the data condition of parallel documents (genuine vs. backtranslated). Our experiments on Europarl-7 and IWSLT-10 show the feasibility of multilingual transfer for DocNMT, particularly on document-specific metrics. We observe that more teacher languages and adequate data balance both contribute to better transfer quality. Surprisingly, the transfer is less sensitive to the data condition, where multilingual DocNMT delivers decent performance with either backtranslated or genuine document pairs.
| false |
[] |
[] | null | null | null |
We thank the reviewers for their insightful comments. We want to thank Macduff Hughes and Wolfgang Macherey for their valuable feedback. We would also like to thank the Google Translate team for their constructive discussions and comments.
|
2022
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
sanchis-trilles-etal-2011-bilingual
|
https://aclanthology.org/2011.eamt-1.35
|
Bilingual segmentation for phrasetable pruning in Statistical Machine Translation
|
Statistical machine translation systems have greatly improved in the last years. However, this boost in performance usually comes at a high computational cost, yielding systems that are often not suitable for integration in hand-held or real-time devices. We describe a novel technique for reducing such cost by performing a Viterbi-style selection of the parameters of the translation model. We present results with finite state transducers and phrasebased models showing a 98% reduction of the number of parameters and a 15-fold increase in translation speed without any significant loss in translation quality.
| false |
[] |
[] | null | null | null |
This paper is based upon work supported by the EC (FEDER/FSE) and the Spanish MICINN under projects MIPRCV "Consolider Ingenio 2010" (CSD2007-00018) and iTrans2 (TIN2009-14511). Also supported by the Spanish MITyC under the erudito.com (TSI-020110-2009-439) project, by the Generalitat Valenciana under grant Prometeo/2009/014, and by the UPV under grant 20091027.The authors would also like to thank the anonymous reviewers for their constructive and detailed comments.
|
2011
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
bonnema-etal-1997-dop
|
https://aclanthology.org/P97-1021
|
A DOP Model for Semantic Interpretation
|
In data-oriented language processing, an annotated language corpus is used as a stochastic grammar. The most probable analysis of a new sentence is constructed by combining fragments from the corpus in the most probable way. This approach has been successfully used for syntactic analysis, using corpora with syntactic annotations such as the Penn Treebank. If a corpus with semantically annotated sentences is used, the same approach can also generate the most probable semantic interpretation of an input sentence. The present paper explains this semantic interpretation method. A data-oriented semantic interpretation algorithm was tested on two semantically annotated corpora: the English ATIS corpus and the Dutch OVIS corpus. Experiments show an increase in semantic accuracy if larger corpus-fragments are taken into consideration.
| false |
[] |
[] | null | null | null | null |
1997
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
stromback-1994-achieving
|
https://aclanthology.org/C94-2135
|
Achieving Flexibility in Unification Formalisms
|
We argue that flexibility is an important property for unification-based formalisms. By flexibility we mean the ability for the user to modify and extend the formalism according to the needs of his problem. The paper discusses some properties necessary to achieve a flexible formalism and presents the FLUF formalism as a realization of these ideas.
| false |
[] |
[] | null | null | null |
This work has been supported by the Swedish Research Council for Engineering Sciences. I am also grateful to Lars Ahrenberg for guidance on this work.
|
1994
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
fox-2005-dependency
|
https://aclanthology.org/P05-2016
|
Dependency-Based Statistical Machine Translation
|
We present a Czech-English statistical machine translation system which performs tree-to-tree translation of dependency structures. The only bilingual resource required is a sentence-aligned parallel corpus. All other resources are monolingual. We also refer to an evaluation method and plan to compare our system's output with a benchmark system.
| false |
[] |
[] | null | null | null |
This work was supported in part by NSF grant IGERT-9870676. We would like to thank Jan Hajič, MartinČmejrek, Jan Cuřín for all of their assistance.
|
2005
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
mason-2013-domain
|
https://aclanthology.org/N13-2010
|
Domain-Independent Captioning of Domain-Specific Images
|
Automatically describing visual content is an extremely difficult task, with hard AI problems in Computer Vision (CV) and Natural Language Processing (NLP) at its core. Previous work relies on supervised visual recognition systems to determine the content of images. These systems require massive amounts of hand-labeled data for training, so the number of visual classes that can be recognized is typically very small. We argue that these approaches place unrealistic limits on the kinds of images that can be captioned, and are unlikely to produce captions which reflect human interpretations. We present a framework for image caption generation that does not rely on visual recognition systems, which we have implemented on a dataset of online shopping images and product descriptions. We propose future work to improve this method, and extensions for other domains of images and natural text.
| false |
[] |
[] | null | null | null | null |
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
petrick-1981-field
|
https://aclanthology.org/P81-1009
|
Field Testing the Transformational Question Answering (TQA) System
|
The Transformational Question Answering (TQA) system was developed over a period of time beginning in the early part of the last decade and continuing to the present. Its syntactic component is a transformational grammar parser [1, 2, 3], and its semantic component is a Knuth attribute grammar [4, 5]. The combination of these components provides sufficient generality, convenience, and efficiency to implement a broad range of linguistic models; in addition to a wide spectrum of transformational grammars, Gazdar-type phrase structure grammar [6] and lexical functional grammar [7] systems appear to be cases in point, for example. The particular grammar which was, in fact, developed, however, was closest to those of the generative semantics variety of transformational grammar; both the underlying structures assigned to sentences and the transformations employed to effect that assignment traced their origins to the generative semantics model. This grammar was the one employed in the field testing to be described. In a more recent version of the system, however, it has been replaced by a translation of logical forms, first to equivalent logical forms in a set domain relational calculus and then to appropriate expressions in the SQL language, System R's high level query language.
The first data base to which the system was applied was one concerning business statistics such as the sales, earnings, number of employees, etc. of 60 large companies over a five-year
| false |
[] |
[] | null | null | null | null |
1981
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
smith-2009-copyright
|
https://aclanthology.org/2009.tc-1.13
|
Copyright issues in translation memory ownership
|
In the last two decades terminological databases and translation memory (TM) software have become ubiquitous in the professional translation community, to such an extent that translating without these tools has now become almost unthinkable in most technical fields. Until recently, however, the question of who actually owned these tools was not considered relevant. Most users installed the software on their desktop computers and built their termbanks and TMs without ever considering the possibility that they could be extracted and sent elsewhere, or that the content of their databases might in fact belong wholly or partly to someone else. With the advent of high-speed data transmission over computer networks, however, these resources have been released from the confines of individual PCs and have begun circulating around the Internet, causing a major shift in the manner in which they are perceived and uncovering new commercial possibilities for their exploitation.
The first outlet for TMs as tradable products was set up in 2007 as a joint initiative between Multilingual Computing, Inc. and International Writers' Group, with the name of TM Marketplace (www.tmmarketplace.com). The creators of TM Marketplace use the term "TM assets" to refer to the products traded on this site. As a result, translation memory files can now be bought, sold and licensed as individual assets. Termbanks, already commercialised in the form of word-lists and specialised glossaries, are also affected by the Internet revolution because as well as being easy to transfer electronically, they are particularly vulnerable to illegal copying.
| false |
[] |
[] | null | null | null | null |
2009
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
chakravarthy-etal-2008-learning
|
https://aclanthology.org/I08-2118
|
Learning Decision Lists with Known Rules for Text Mining
|
Many real-world systems for handling unstructured text data are rule-based. Examples of such systems are named entity annotators, information extraction systems, and text classifiers. In each of these applications, ordering rules into a decision list is an important issue. In this paper, we assume that a set of rules is given and study the problem (MaxDL) of ordering them into an optimal decision list with respect to a given training set. We formalize this problem and show that it is NP-Hard and cannot be approximated within any reasonable factors. We then propose some heuristic algorithms and conduct exhaustive experiments to evaluate their performance. In our experiments we also observe performance improvement over an existing decision list learning algorithm, by merely reordering the rules output by it.
| false |
[] |
[] | null | null | null | null |
2008
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
nivre-1994-pragmatics
|
https://aclanthology.org/W93-0414
|
Pragmatics Through Context Management
|
Pragmatically based dialogue management requires flexible and efficient representation of contextual information. The approach described in this paper uses logical knowledge bases to represent contextual information and special abductive reasoning tools to manage these knowledge bases. One of the advantages of such a reasoning based approach to computational dialogue pragmatics is that the same rules, stated declaratively, can be used both in analysis and generation.
| false |
[] |
[] | null | null | null | null |
1994
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
rajagopal-etal-2019-domain
|
https://aclanthology.org/W19-5009
|
Domain Adaptation of SRL Systems for Biological Processes
|
Domain adaptation remains one of the most challenging aspects in the widespread use of Semantic Role Labeling (SRL) systems. Current state-of-the-art methods are typically trained on large-scale datasets, but their performances do not directly transfer to lowresource domain-specific settings. In this paper, we propose two approaches for domain adaptation in biological domain that involve pre-training LSTM-CRF based on existing large-scale datasets and adapting it for a low-resource corpus of biological processes. Our first approach defines a mapping between the source labels and the target labels, and the other approach modifies the final CRF layer in sequence-labeling neural network architecture. We perform our experiments on Pro-cessBank (Berant et al., 2014) dataset which contains less than 200 paragraphs on biological processes. We improve over the previous state-of-the-art system on this dataset by 21 F1 points. We also show that, by incorporating event-event relationship in ProcessBank, we are able to achieve an additional 2.6 F1 gain, giving us possible insights into how to improve SRL systems for biological process using richer annotations.
| true |
[] |
[] |
Good Health and Well-Being
| null | null | null |
2019
| false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
xing-etal-2020-tasty
|
https://aclanthology.org/2020.emnlp-main.292
|
Tasty Burgers, Soggy Fries: Probing Aspect Robustness in Aspect-Based Sentiment Analysis
|
Aspect-based sentiment analysis (ABSA) aims to predict the sentiment towards a specific aspect in the text. However, existing ABSA test sets cannot be used to probe whether a model can distinguish the sentiment of the target aspect from the non-target aspects. To solve this problem, we develop a simple but effective approach to enrich ABSA test sets. Specifically, we generate new examples to disentangle the confounding sentiments of the non-target aspects from the target aspect's sentiment. Based on the SemEval 2014 dataset, we construct the Aspect Robustness Test Set (ARTS) as a comprehensive probe of the aspect robustness of ABSA models. Over 92% data of ARTS show high fluency and desired sentiment on all aspects by human evaluation. Using ARTS, we analyze the robustness of nine ABSA models, and observe, surprisingly, that their accuracy drops by up to 69.73%. We explore several ways to improve aspect robustness, and find that adversarial training can improve models' performance on ARTS by up to 32.85%. 1
| false |
[] |
[] | null | null | null |
We appreciate Professor Rada Mihalcea for her insights that helped us plan this research, Pengfei Liu for valuable suggestions on writing, and Yuchun Dai for helping to code some functions in our annotation tool. We also want to convey special thanks to Mahi Shafiullah and Osmond Wang for brilliant suggestions on the wording of the title. This work was partially funded by China National Key R&D Program (No. 2018YFC0831105, 2018YFB1005104, 2017YFB1002104), National Natural Science Foundation of China (No. 61751201, 61976056, 61532011, 62076069), Shanghai Municipal Science and Technology Major Project (No.2018SHZDZX01), Science and Technology Commission of Shanghai Municipality Grant (No.18DZ1201000, 17JC1420200).
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
azadi-khadivi-2015-improved
|
https://aclanthology.org/2015.mtsummit-papers.25
|
Improved search strategy for interactive predictions in computer-assisted translation
|
Statistical machine translation outputs are not error-free and not yet of high quality. So in cases where we need high quality translations we definitely need human intervention. Interactive-predictive machine translation is a framework which enables the collaboration of the human and the translation system. Here, we address the problem of searching for the best suffix to propose to the user in the phrase-based interactive prediction scenario. By adding the jump operation to the common edit distance based search, we try to overcome the lack of some of the reorderings in the search graph which might be desired by the user. The experimental results show that this method improves the base method by 1.35% in KSMR, and if we combine the edit error in the proposed method with the translation scores given by the statistical models to select the offered suffix, we gain a KSMR improvement of about 1.63% compared to the base search method.
| false |
[] |
[] | null | null | null | null |
2015
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
dong-etal-2013-difficulties
|
https://aclanthology.org/Y13-1012
|
Difficulties in Perception and Pronunciation of Mandarin Chinese Disyllabic Word Tone Acquisition: A Study of Some Japanese University Students
|
Tonal errors pose a serious problem to Mandarin Chinese learners, making them stumble in their communication. The purpose of this paper is to investigate beginner level Japanese students' difficulties in the perception and pronunciation of disyllabic words, particularly to find out which combinations of tones these errors mostly occur in. As a result, the errors made by the 10 subjects were mostly found in tonal patterns 1-3, 2-1, 2-3, 3-2 and 4-3 in both perception and pronunciation. Furthermore, by comparing the ratio of tonal errors of initial to final syllables, we can tell that the initial syllables appear more difficult than the final syllables in perception, but in pronunciation this tendency is not found. Moreover, there seems to be some connection between learners' perception and pronunciation in their acquisition process.
| true |
[] |
[] |
Quality Education
| null | null | null |
2013
| false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false |
spreyer-kuhn-2009-data
|
https://aclanthology.org/W09-1104
|
Data-Driven Dependency Parsing of New Languages Using Incomplete and Noisy Training Data
|
We present a simple but very effective approach to identifying high-quality data in noisy data sets for structured problems like parsing, by greedily exploiting partial structures. We analyze our approach in an annotation projection framework for dependency trees, and show how dependency parsers from two different paradigms (graph-based and transition-based) can be trained on the resulting tree fragments. We train parsers for Dutch to evaluate our method and to investigate to which degree graph-based and transitionbased parsers can benefit from incomplete training data. We find that partial correspondence projection gives rise to parsers that outperform parsers trained on aggressively filtered data sets, and achieve unlabeled attachment scores that are only 5% behind the average UAS for Dutch in the CoNLL-X Shared Task on supervised parsing (Buchholz and Marsi, 2006).
| false |
[] |
[] | null | null | null |
The research reported in this paper has been supported by the German Research Foundation DFG as part of SFB 632 "Information structure" (project D4; PI: Kuhn).
|
2009
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
parmentier-etal-2008-tulipa
|
https://aclanthology.org/W08-2316
|
TuLiPA: A syntax-semantics parsing environment for mildly context-sensitive formalisms
|
In this paper we present a parsing architecture that allows processing of different mildly context-sensitive formalisms, in particular Tree-Adjoining Grammar (TAG), Multi-Component Tree-Adjoining Grammar with Tree Tuples (TT-MCTAG) and simple Range Concatenation Grammar (RCG). Furthermore, for tree-based grammars, the parser computes not only syntactic analyses but also the corresponding semantic representations.
| false |
[] |
[] | null | null | null | null |
2008
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ID: faisal-etal-2021-sd-qa
url: https://aclanthology.org/2021.findings-emnlp.281
title: SD-QA: Spoken Dialectal Question Answering for the Real World
abstract: Question answering (QA) systems are now available through numerous commercial applications for a wide variety of domains, serving millions of users that interact with them via speech interfaces. However, current benchmarks in QA research do not account for the errors that speech recognition models might introduce, nor do they consider the language variations (dialects) of the users. To address this gap, we augment an existing QA dataset to construct a multi-dialect, spoken QA benchmark on five languages (Arabic, Bengali, English, Kiswahili, Korean) with more than 68k audio prompts in 24 dialects from 255 speakers. We provide baseline results showcasing the real-world performance of QA systems and analyze the effect of language variety and other sensitive speaker attributes on downstream performance. Last, we study the fairness of the ASR and QA models with respect to the underlying user populations.
label_nlp4sg: false
task: []
method: []
goal1: null | goal2: null | goal3: null
acknowledgments: This work is generously supported by NSF Awards 2040926 and 2125466. The dataset creation was supported through a Google Award for Inclusion Research. We also want to thank Jacob Eisenstein, Manaal Faruqui, and Jon Clark for helpful discussions on question answering and data collection. The authors are grateful to Kathleen Siminyu for her help with collecting Kiswahili and Kenyan English speech samples, to Sylwia Tur and Moana Wilkinson from Appen for help with the rest of the data collection and quality assurance process, and to all the annotators who participated in the creation of SD-QA. We also thank Abdulrahman Alshammari for his help with analyzing and correcting the Arabic data.
year: 2021
sdg1–sdg17: all false
ID: rocha-chaves-machado-rino-2008-mitkov
url: https://aclanthology.org/2008.jeptalnrecital-long.1
title: The Mitkov algorithm for anaphora resolution in Portuguese
abstract: This paper reports on the use of Mitkov's algorithm for pronoun resolution in texts written in Brazilian Portuguese. Third person pronouns are the only ones focused upon here, with noun phrases as antecedents. A system for anaphora resolution in Brazilian Portuguese texts was built that embeds most of Mitkov's features. Some of his resolution factors were directly incorporated into the system; others had to be slightly modified for language adequacy. The resulting approach was intrinsically evaluated on hand-annotated corpora. It was also compared to Lappin & Leass's algorithm for pronoun resolution, also customized to Portuguese. Success rate was the evaluation measure used in both experiments. The results of both evaluations are discussed here.
label_nlp4sg: false
task: []
method: []
goal1: null | goal2: null | goal3: null
acknowledgments: null
year: 2008
sdg1–sdg17: all false
ID: ahia-etal-2021-low-resource
url: https://aclanthology.org/2021.findings-emnlp.282
title: The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation
abstract: A "bigger is better" explosion in the number of parameters in deep neural networks has made it increasingly challenging to make state-of-the-art networks accessible in compute-restricted environments. Compression techniques have taken on renewed importance as a way to bridge the gap. However, evaluation of the trade-offs incurred by popular compression techniques has been centered on high-resource datasets. In this work, we instead consider the impact of compression in a data-limited regime. We introduce the term low-resource double bind to refer to the co-occurrence of data limitations and compute resource constraints. This is a common setting for NLP for low-resource languages, yet the trade-offs in performance are poorly studied. Our work offers surprising insights into the relationship between capacity and generalization in data-limited regimes for the task of machine translation. Our experiments on magnitude pruning for translations from English into Yoruba, Hausa, Igbo and German show that in low-resource regimes, sparsity preserves performance on frequent sentences but has a disparate impact on infrequent ones. However, it improves robustness to out-of-distribution shifts, especially for datasets that are very distinct from the training distribution. Our findings suggest that sparsity can play a beneficial role at curbing memorization of low frequency attributes, and therefore offers a promising solution to the low-resource double bind.
label_nlp4sg: false
task: []
method: []
goal1: null | goal2: null | goal3: null
acknowledgments: null
year: 2021
sdg1–sdg17: all false
ID: burtenshaw-kestemont-2021-uantwerp
url: https://aclanthology.org/2021.semeval-1.121
title: UAntwerp at SemEval-2021 Task 5: Spans are Spans, stacking a binary word level approach to toxic span detection
abstract: This paper describes the system developed by the Antwerp Centre for Digital humanities and literary Criticism [UAntwerp] for toxic span detection. We used a stacked generalisation ensemble of five component models, with two distinct interpretations of the task. Two models attempted to predict binary word toxicity based on ngram sequences, whilst 3 categorical span based models were trained to predict toxic token labels based on complete sequence tokens. The five models' predictions were ensembled within an LSTM model. As well as describing the system, we perform error analysis to explore model performance in relation to textual features. The system described in this paper scored 0.6755 and ranked 26th.
label_nlp4sg: true
task: []
method: []
goal1: Peace, Justice and Strong Institutions | goal2: null | goal3: null
acknowledgments: null
year: 2021
sdg16: true (all other sdg flags false)
ID: zens-ney-2005-word
url: https://aclanthology.org/W05-0834
title: Word Graphs for Statistical Machine Translation
abstract: Word graphs have various applications in the field of machine translation. Therefore it is important for machine translation systems to produce compact word graphs of high quality. We will describe the generation of word graphs for state of the art phrase-based statistical machine translation. We will use these word graphs to provide an analysis of the search process. We will evaluate the quality of the word graphs using the well-known graph word error rate. Additionally, we introduce two novel graph-to-string criteria: the position-independent graph word error rate and the graph BLEU score. Experimental results are presented for two Chinese-English tasks: the small IWSLT task and the NIST large data track task. For both tasks, we achieve significant reductions of the graph error rate already with compact word graphs.
label_nlp4sg: false
task: []
method: []
goal1: null | goal2: null | goal3: null
acknowledgments: This work was partly funded by the European Union under the integrated project TC-Star (Technology and Corpora for Speech to Speech Translation, IST-2002-FP6-506738, http://www.tc-star.org).
year: 2005
sdg1–sdg17: all false
ID: boleda-etal-2013-intensionality
url: https://aclanthology.org/W13-0104
title: Intensionality was only alleged: On adjective-noun composition in distributional semantics
abstract: Distributional semantics has very successfully modeled semantic phenomena at the word level, and recently interest has grown in extending it to capture the meaning of phrases via semantic composition. We present experiments in adjective-noun composition which (1) show that adjectival modification can be successfully modeled with distributional semantics, (2) show that composition models inspired by the semantics of higher-order predication fare better than those that perform simple feature union or intersection, (3) contrary to what the theoretical literature might lead one to expect, do not yield a distinction between intensional and non-intensional modification, and (4) suggest that head noun polysemy and whether the adjective corresponds to a typical attribute of the noun are relevant factors in the distributional representation of adjective phrases.
label_nlp4sg: false
task: []
method: []
goal1: null | goal2: null | goal3: null
acknowledgments: We thank Miquel Cornudella for help in constructing the dataset. We acknowledge the support of Spanish MICINN grant FFI2010-09464-E (McNally, Boleda), the ICREA Foundation (McNally), Catalan AGAUR grant 2010BP-A00070, MICINN grant TIN2009-14715-C04-04, EU grant PASCAL2, FP7-ICT-216886, the DARPA DEFT program under AFRL grant FA8750-13-2-0026 (Boleda) and the ERC under the 2011 Starting Independent Research Grant 283554 to the COMPOSES project (Baroni, Pham). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA, AFRL or the US government.
year: 2013
sdg1–sdg17: all false
ID: murveit-weintraub-1990-real
url: https://aclanthology.org/H90-1097
title: Real-Time Speech Recognition Systems
abstract: SRI and U.C. Berkeley have begun a cooperative effort to develop a new architecture for real-time implementation of spoken language systems (SLS). Our goal is to develop fast speech recognition algorithms, and supporting hardware capable of recognizing continuous speech from a bigram or trigram based 20,000 word vocabulary or a 1,000 to 5,000 word SLS systems. We have designed eight special purpose VLSI chips for the HMM board, six chips at U.C. Berkeley for HMM beam search and Viterbi processing, and two chips at SRI for interfacing to the grammar board.
label_nlp4sg: false
task: []
method: []
goal1: null | goal2: null | goal3: null
acknowledgments: null
year: 1990
sdg1–sdg17: all false