ID | url | title | abstract | label_nlp4sg | task | method | goal1 | goal2 | goal3 | acknowledgments | year | sdg1 | sdg2 | sdg3 | sdg4 | sdg5 | sdg6 | sdg7 | sdg8 | sdg9 | sdg10 | sdg11 | sdg12 | sdg13 | sdg14 | sdg15 | sdg16 | sdg17 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
pogodalla-2000-generation-lambek
|
https://aclanthology.org/C00-2091
|
Generation, Lambek Calculus, Montague's Semantics and Semantic Proof Nets
|
Most studies in the framework of Lambek calculus have considered the parsing process and ignored the generation process. This paper relies on the close link between Lambek calculus and linear logic to present a method for the generation process with semantic proof nets. We express the process as a proof-search procedure based on a graph calculus; the solutions appear as a matrix computation that preserves the decidability properties, and we characterize a polynomial-time case.
| false |
[] |
[] | null | null | null |
I would like to thank Christian Retoré who pointed out to me Girard's algebraic interpretation of the cut elimination, and the anonymous reviewers for their helpful comments.
|
2000
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
yang-etal-2021-multilingual
|
https://aclanthology.org/2021.acl-short.31
|
Multilingual Agreement for Multilingual Neural Machine Translation
|
Although multilingual neural machine translation (MNMT) enables multiple language translations, the training process is based on independent multilingual objectives. Most multilingual models cannot explicitly exploit different language pairs to assist each other, ignoring the relationships among them. In this work, we propose a novel agreement-based method to encourage multilingual agreement among different translation directions, which minimizes the differences among them. We combine the multilingual training objectives with the agreement term by randomly substituting some fragments of the source language with their counterpart translations of auxiliary languages. To examine the effectiveness of our method, we conduct experiments on the multilingual translation task of 10 language pairs. Experimental results show that our method achieves significant improvements over the previous multilingual baselines.
| false |
[] |
[] | null | null | null |
This work was supported in part by the National Natural Science Foundation of China (Grant Nos. U1636211, 61672081, 61370126), the 2020 Tencent WeChat Rhino-Bird Focused Research Program, and the Fund of the State Key Laboratory of Software Development Environment (Grant No. SKLSDE2019ZX-17).
|
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
nguyen-etal-2020-vietnamese
|
https://aclanthology.org/2020.coling-main.233
|
A Vietnamese Dataset for Evaluating Machine Reading Comprehension
|
Over 97 million people worldwide speak Vietnamese as their native language. However, there are few research studies on machine reading comprehension (MRC) for Vietnamese, the task of understanding a text and answering questions related to it. Due to the lack of benchmark datasets for Vietnamese, we present the Vietnamese Question Answering Dataset (UIT-ViQuAD), a new dataset for this low-resource language, to evaluate MRC models. This dataset comprises over 23,000 human-generated question-answer pairs based on 5,109 passages of 174 Vietnamese articles from Wikipedia. In particular, we propose a new process of dataset creation for Vietnamese MRC. Our in-depth analyses illustrate that our dataset requires abilities beyond simple reasoning such as word matching, demanding both single-sentence and multiple-sentence inference. Besides, we conduct experiments on state-of-the-art MRC methods for English and Chinese as the first experimental models on UIT-ViQuAD. We also estimate human performance on the dataset and compare it to the experimental results of powerful machine learning models. The substantial differences between human performance and the best model performance on the dataset indicate that improvements can be made on UIT-ViQuAD in future research. Our dataset is freely available on our website to encourage the research community to overcome challenges in Vietnamese MRC.
| false |
[] |
[] | null | null | null |
We would like to thank the reviewers' comments which are helpful for improving the quality of our work. In addition, we would like to thank our workers for their cooperation.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
chudy-etal-2013-tmuse
|
https://aclanthology.org/I13-2011
|
Tmuse: Lexical Network Exploration
|
We demonstrate an online application to explore lexical networks. Tmuse displays a 3D interactive graph of similar words, whose layout is based on the proxemy between vertices of synonymy and translation networks. Semantic themes of words related to a query are outlined, and projected across languages. The application is useful as, for example, a writing assistance. It is available, online, for Mandarin Chinese, English and French, as well as the corresponding language pairs, and can easily be fitted to new resources.
| false |
[] |
[] | null | null | null | null |
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
chu-etal-2020-solving
|
https://aclanthology.org/2020.emnlp-main.471
|
Solving Historical Dictionary Codes with a Neural Language Model
|
We solve difficult word-based substitution codes by constructing a decoding lattice and searching that lattice with a neural language model. We apply our method to a set of enciphered letters exchanged between US Army General James Wilkinson and agents of the Spanish Crown in the late 1700s and early 1800s, obtained from the US Library of Congress. We are able to decipher 75.1% of the cipher-word tokens correctly.
| false |
[] |
[] | null | null | null |
We would like to thank Johnny Fountain and Kevin Chatupornpitak of Karga7, and the staff who transcribed data from the Library of Congress, who provided scans of the original documents. We would also like to thank the anonymous reviewers for many helpful suggestions.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
macklovitch-1992-tagger
|
https://aclanthology.org/1992.tmi-1.10
|
Where the tagger falters
|
Statistical n-gram taggers like that of [Church 1988] or [Foster 1991] assign a part-of-speech label to each word in a text on the basis of probability estimates that are automatically derived from a large, already tagged training corpus. This paper examines the grammatical constructions which cause such taggers to falter most frequently. As one would expect, certain of these errors are due to linguistic dependencies that extend beyond the limited scope of statistical taggers, while others can be seen to derive from the composition of the tag set; many can only be corrected through a full syntactic or semantic analysis of the sentence. The paper goes on to consider two very different approaches to the problem of automatically detecting tagging errors. The first uses statistical information that is already at the tagger's disposal; the second attempts to isolate error-prone contexts by formulating linguistic diagnostics in terms of regular expressions over tag sequences. In a small experiment focussing on the preterite/past participle ambiguity, the linguistic technique turns out to be more efficient, while the statistical technique is more effective.
| false |
[] |
[] | null | null | null |
This paper is based on George Foster's excellent Master's thesis. I am indebted to him, both for his explanations of points in the thesis and for kindly providing me with the supplementary data on error frequencies. All responsibility for errors of interpretation is mine alone. Pierre Isabelle, Michel Simard, Marc Dymetman and Marie-Louise Hannan all provided comments on an earlier version of this paper, for which I also express my gratitude.
|
1992
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
yang-etal-2008-resolving
|
https://aclanthology.org/I08-2098
|
Resolving Ambiguities of Chinese Conjunctive Structures by Divide-and-conquer Approaches
|
This paper presents a method to enhance a Chinese parser in parsing conjunctive structures. Long conjunctive structures cause long-distance dependencies and tremendous syntactic ambiguities. Pure syntactic approaches can hardly determine the boundaries of conjunctive phrases properly. In this paper, we propose a divide-and-conquer approach which overcomes the data-sparseness of the training data and uses both syntactic symmetry and semantic reasonableness to evaluate ambiguous conjunctive structures. Compared with the performance of the PCFG parser without the divide-and-conquer approach, the precision of conjunctive boundary detection improves from 53.47% to 83.17%, and the bracketing f-score of sentences with conjunctive structures rises by about 11%.
| false |
[] |
[] | null | null | null |
This research was supported in part by National Digital Archives Program (NDAP, Taiwan) sponsored by the National Science Council of Taiwan under NSC Grants: NSC95-2422-H-001-031-.
|
2008
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
blom-1998-statistical
|
https://aclanthology.org/W98-1617
|
A statistical and structural approach to extracting collocations likely to be of relevance in relation to an LSP sub-domain text
|
Department of Lexicography and Computational Linguistics, The Aarhus Business School, bb@lng.hha.dk
| false |
[] |
[] | null | null | null | null |
1998
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
hieber-riezler-2015-bag
|
https://aclanthology.org/N15-1123
|
Bag-of-Words Forced Decoding for Cross-Lingual Information Retrieval
|
Current approaches to cross-lingual information retrieval (CLIR) rely on standard retrieval models into which query translations by statistical machine translation (SMT) are integrated to varying degrees. In this paper, we present an attempt to turn this situation on its head: Instead of the retrieval aspect, we emphasize the translation component in CLIR. We perform search by using an SMT decoder in forced decoding mode to produce a bag-of-words representation of the target documents to be ranked. The SMT model is extended by retrieval-specific features that are optimized jointly with standard translation features for a ranking objective. We find significant gains over the state-of-the-art in a large-scale evaluation on cross-lingual search in the domains patents and Wikipedia.
| false |
[] |
[] | null | null | null |
This research was supported in part by DFG grant RI-2221/1-2 "Weakly Supervised Learning of Cross-Lingual Systems".
|
2015
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
andreevskaia-bergler-2008-specialists
|
https://aclanthology.org/P08-1034
|
When Specialists and Generalists Work Together: Overcoming Domain Dependence in Sentiment Tagging
|
This study presents a novel approach to the problem of system portability across different domains: a sentiment annotation system that integrates a corpus-based classifier trained on a small set of annotated in-domain data and a lexicon-based system trained on Word-Net. The paper explores the challenges of system portability across domains and text genres (movie reviews, news, blogs, and product reviews), highlights the factors affecting system performance on out-of-domain and smallset in-domain data, and presents a new system consisting of the ensemble of two classifiers with precision-based vote weighting, that provides significant gains in accuracy and recall over the corpus-based classifier and the lexicon-based system taken individually.
| false |
[] |
[] | null | null | null | null |
2008
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
prickett-etal-2018-seq2seq
|
https://aclanthology.org/W18-5810
|
Seq2Seq Models with Dropout can Learn Generalizable Reduplication
|
Natural language reduplication can pose a challenge to neural models of language, and has been argued to require variables (Marcus et al., 1999). Sequence-to-sequence neural networks have been shown to perform well at a number of other morphological tasks (Cotterell et al., 2016), and produce results that highly correlate with human behavior (Kirov, 2017; Kirov & Cotterell, 2018) but do not include any explicit variables in their architecture. We find that they can learn a reduplicative pattern that generalizes to novel segments if they are trained with dropout (Srivastava et al., 2014). We argue that this matches the scope of generalization observed in human reduplication.
| false |
[] |
[] | null | null | null |
The authors would like to thank the members of the UMass Sound Workshop, the members of the UMass NLP Reading Group, Tal Linzen, and Ryan Cotterell for helpful feedback and discussion. Additionally, we would like to thank the SIGMORPHON reviewers for their comments. This work was supported by NSF Grant #1650957.
|
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
simoes-etal-2016-enriching
|
https://aclanthology.org/L16-1426
|
Enriching a Portuguese WordNet using Synonyms from a Monolingual Dictionary
|
In this article we present an exploratory approach to enrich a WordNet-like lexical ontology with the synonyms present in a standard monolingual Portuguese dictionary. The dictionary was converted from PDF into XML and senses were automatically identified and annotated. This allowed us to extract them, independently of definitions, and to create sets of synonyms (synsets). These synsets were then aligned with WordNet synsets, both in the same language (Portuguese) and projecting the Portuguese terms into English, Spanish and Galician. This process allowed both the addition of new term variants to existing synsets, as to create new synsets for Portuguese.
| false |
[] |
[] | null | null | null | null |
2016
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
levin-2018-annotation
|
https://aclanthology.org/W18-4901
|
Annotation Schemes for Surface Construction Labeling
|
In this talk I will describe the interaction of linguistics and language technologies in Surface Construction Labeling (SCL) from the perspective of corpus annotation tasks such as definiteness, modality, and causality. Linguistically, following Construction Grammar, SCL recognizes that meaning may be carried by morphemes, words, or arbitrary constellations of morpho-lexical elements. SCL is like Shallow Semantic Parsing in that it does not attempt a full compositional analysis of meaning, but rather identifies only the main elements of a semantic frame, where the frames may be invoked by constructions as well as lexical items. Computationally, SCL is different from tasks such as information extraction in that it deals only with meanings that are expressed in a conventional, grammaticalized way and does not address inferred meanings. I review the work of Dunietz (2018) on the labeling of causal frames including causal connectives and cause and effect arguments. I will describe how to design an annotation scheme for SCL, including isolating basic units of form and meaning and building a "constructicon". I will conclude with remarks about the nature of universal categories and universal meaning representations in language technologies. This talk describes joint work with
| false |
[] |
[] | null | null | null | null |
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
iwamoto-yukawa-2020-rijp
|
https://aclanthology.org/2020.semeval-1.10
|
RIJP at SemEval-2020 Task 1: Gaussian-based Embeddings for Semantic Change Detection
|
This paper describes the model proposed and submitted by our RIJP team to SemEval 2020 Task1: Unsupervised Lexical Semantic Change Detection. In the model, words are represented by Gaussian distributions. For Subtask 1, the model achieved average scores of 0.51 and 0.70 in the evaluation and post-evaluation processes, respectively. The higher score in the post-evaluation process than that in the evaluation process was achieved owing to appropriate parameter tuning. The results indicate that the proposed Gaussian-based embedding model is able to express semantic shifts while having a low computational complexity.
| false |
[] |
[] | null | null | null |
We gratefully acknowledge Kwangjin Jeong for valuable discussions and the anonymous reviewers for useful comments.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
jones-1994-exploring
|
https://aclanthology.org/C94-1069
|
Exploring the Role of Punctuation in Parsing Natural Text
|
Few, if any, current NLP systems make any significant use of punctuation. Intuitively, a treatment of punctuation seems necessary to the analysis and production of text. Whilst this has been suggested in the fields of discourse structure, it is still unclear whether punctuation can help in the syntactic field. This investigation attempts to answer this question by parsing some corpus-based material with two similar grammars: one including rules for punctuation, the other ignoring it. The punctuated grammar significantly outperforms the unpunctuated one, and so the conclusion is that punctuation can play a useful role in syntactic processing.
| false |
[] |
[] | null | null | null |
This work was carried out under Esprit Acquilex-II, BRA 7315, and an ESRC Research Studentship, R00429334171. Thanks for instructive and helpful comments to Ted Briscoe, John Carroll, Robert Dale, Henry Thompson and anonymous CoLing reviewers.
|
1994
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
wachsmuth-etal-2018-argumentation
|
https://aclanthology.org/C18-1318
|
Argumentation Synthesis following Rhetorical Strategies
|
Persuasion is rarely achieved through a loose set of arguments alone. Rather, an effective delivery of arguments follows a rhetorical strategy, combining logical reasoning with appeals to ethics and emotion. We argue that such a strategy means to select, arrange, and phrase a set of argumentative discourse units. In this paper, we model rhetorical strategies for the computational synthesis of effective argumentation. In a study, we let 26 experts synthesize argumentative texts with different strategies for 10 topics. We find that the experts agree in the selection significantly more when following the same strategy. While the texts notably vary for different strategies, especially their arrangement remains stable. The results suggest that our model enables a strategical synthesis.
| false |
[] |
[] | null | null | null |
Thanks to Yamen Ajjour, Wei-Fan Chen, Yulia Clausen, Debopam Das, Erdan Genc, Tim Gollub, Yulia Grishina, Erik Hägert, Johannes Kiesel, Lukas Paschen, Martin Potthast, Robin Schäfer, Constanze Schmitt, Uladzimir Sidarenka, Shahbaz Syed, and Michael Völske for taking part in our study.
|
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
guo-etal-2017-effective
|
https://aclanthology.org/E17-1011
|
Which is the Effective Way for Gaokao: Information Retrieval or Neural Networks?
|
As one of the most important tests in China, Gaokao is designed to be difficult enough to distinguish excellent high school students. In this work, we detail the Gaokao History Multiple Choice Questions (GKHMC) and propose two different approaches to address them using various resources. One approach is based on an entity search technique (IR approach); the other is based on a text entailment approach where we specifically employ deep neural networks (NN approach). The results of experiments on our collected real Gaokao questions show that the two approaches are good at different categories of questions, i.e., the IR approach performs much better on entity questions (EQs) while the NN approach shows its advantage on sentence questions (SQs). Our new method achieves state-of-the-art performance and shows that it is indispensable to apply a hybrid method when participating in real-world tests.
| true |
[] |
[] |
Quality Education
| null | null |
We thank the anonymous reviewers for helpful comments. This work was supported by the National High Technology Development 863 Program of China (No. 2015AA015405) and the Natural Science Foundation of China (No. 61533018). This research work was also supported by Google through its focused research awards program.
|
2017
| false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false |
blekhman-etal-1997-pars
|
https://aclanthology.org/1997.mtsummit-papers.16
|
PARS/U for Windows: The World's First Commercial English-Ukrainian and Ukrainian-English Machine Translation System
|
The paper describes the PARS/U Ukrainian-English bidirectional MT system by Lingvistica '93 Co. PARS/U translates MS Word and HTML files as well as screen Helps. It features an easy-to-master dictionary updating program, which permits the user to customize the system by means of running subject-area oriented texts through the MT engine. PARS/U is marketed in Ukraine and North America.
| false |
[] |
[] | null | null | null | null |
1997
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ding-etal-2014-using
|
https://aclanthology.org/D14-1148
|
Using Structured Events to Predict Stock Price Movement: An Empirical Investigation
|
It has been shown that news events influence the trends of stock price movements. However, previous work on news-driven stock market prediction relies on shallow features (such as bags-of-words, named entities and noun phrases), which do not capture structured entity-relation information, and hence cannot represent complete and exact events. Recent advances in Open Information Extraction (Open IE) techniques enable the extraction of structured events from web-scale data. We propose to adapt Open IE technology for event-based stock price movement prediction, extracting structured events from large-scale public news without manual effort. Both linear and nonlinear models are employed to empirically investigate the hidden and complex relationships between events and the stock market. Large-scale experiments show that the accuracy of S&P 500 index prediction is 60%, and that of individual stock prediction can be over 70%. Our event-based system outperforms bags-of-words-based baselines, and previously reported systems trained on S&P 500 stock historical data.
| false |
[] |
[] | null | null | null |
We thank the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the National Basic Research Program (973 Program) of China via Grant 2014CB340503, the National Natural Science Foundation of China (NSFC) via Grant 61133012 and 61202277, the Singapore Ministry of Education (MOE) AcRF Tier 2 grant T2MOE201301 and SRG ISTD 2012 038 from Singapore University of Technology and Design. We are very grateful to Ji Ma for providing an implementation of the neural network algorithm.
|
2014
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
deleger-etal-2014-annotation
|
http://www.lrec-conf.org/proceedings/lrec2014/pdf/552_Paper.pdf
|
Annotation of specialized corpora using a comprehensive entity and relation scheme
|
Annotated corpora are essential resources for many applications in Natural Language Processing. They provide insight on the linguistic and semantic characteristics of the genre and domain covered, and can be used for the training and evaluation of automatic tools. In the biomedical domain, annotated corpora of English texts have become available for several genres and subfields. However, very few similar resources are available for languages other than English. In this paper we present an effort to produce a high-quality corpus of clinical documents in French, annotated with a comprehensive scheme of entities and relations. We present the annotation scheme as well as the results of a pilot annotation study covering 35 clinical documents in a variety of subfields and genres. We show that high inter-annotator agreement can be achieved using a complex annotation scheme.
| false |
[] |
[] | null | null | null |
This work was supported by the French National Agency for Research under grants CABeRneT 3 (ANR-13-JCJC) and Accordys 4 (ANR-12-CORD-007-03).
|
2014
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
hinrichs-etal-2010-weblicht-web
|
https://aclanthology.org/P10-4005
|
WebLicht: Web-Based LRT Services for German
|
This software demonstration presents WebLicht (short for: Web-Based Linguistic Chaining Tool), a web-based service environment for the integration and use of language resources and tools (LRT). WebLicht is being developed as part of the D-SPIN project. WebLicht is implemented as a web application so that there is no need for users to install any software on their own computers or to concern themselves with the technical details involved in building tool chains. The integrated web services are part of a prototypical infrastructure that was developed to facilitate chaining of LRT services. WebLicht allows the integration and use of distributed web services with standardized APIs. The nature of these open and standardized APIs makes it possible to access the web services from nearly any programming language, shell script or workflow engine (UIMA, GATE, etc.). Additionally, an application for integration of additional services is available, allowing anyone to contribute his own web service.
| false |
[] |
[] | null | null | null |
WebLicht is the product of a combined effort within the D-SPIN projects (www.d-spin.org). Currently, partners include: Seminar für Sprachwissenschaft/Computerlinguistik, Universität Tübingen; Abteilung für Automatische Sprachverarbeitung, Universität Leipzig; Institut für Maschinelle Sprachverarbeitung, Universität Stuttgart; and Berlin-Brandenburgische Akademie der Wissenschaften.
|
2010
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
huang-etal-2014-sentence
|
http://www.lrec-conf.org/proceedings/lrec2014/pdf/60_Paper.pdf
|
Sentence Rephrasing for Parsing Sentences with OOV Words
|
This paper addresses the problems of out-of-vocabulary (OOV) words, named entities in particular, in dependency parsing. The OOV words, whose word forms are unknown to the learning-based parser, in a sentence may decrease the parsing performance. To deal with this problem, we propose a sentence rephrasing approach to replace each OOV word in a sentence with a popular word of the same named entity type in the training set, so that the knowledge of the word forms can be used for parsing. The highest-frequency-based rephrasing strategy and the information-retrieval-based rephrasing strategy are explored to select the word to replace, and the Chinese Treebank 6.0 (CTB6) corpus is adopted to evaluate the feasibility of the proposed sentence rephrasing strategies. Experimental results show that rephrasing some specific types of OOV words such as Corporation, Organization, and Competition increases the parsing performances. This methodology can be applied to domain adaptation to deal with OOV problems.
| false |
[] |
[] | null | null | null |
This research was partially supported by National Science Council, Taiwan under NSC101-2221-E-002-195-MY3.
|
2014
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
himmel-1998-visualization
|
https://aclanthology.org/W98-0205
|
Visualization for Large Collections of Multimedia Information
|
Organizations that make use of large amounts of multimedia material (especially images and video) require easy access to such information. Recent developments in computer hardware and algorithm design have made possible content indexing of digital video information and efficient display of 3D data representations. This paper describes collaborative work between Boeing Applied Research & Technology (AR&T), Carnegie Mellon University (CMU), and the Battelle Pacific Northwest National Laboratories (PNNL) to integrate media indexing with computer visualization to achieve effective content-based access to video information. Text metadata, representing video content, was extracted from the CMU Informedia system, processed by AR&T's text analysis software, and presented to users via PNNL's Starlight 3D visualization system. This approach shows how to make multimedia information accessible to a text-based visualization system, facilitating a global view of large collections of such data. We evaluated our approach by making several experimental queries against a library of eight hours of video segmented into several hundred "video paragraphs." We conclude that the search performance of Informedia was enhanced in terms of ease of exploration by the integration with Starlight.
| false |
[] |
[] | null | null | null |
Our thanks go to Ricky Houghton and Bryan Maher at the Carnegie Mellon University Informedia project, and to John Risch, Scott Dowson, Brian Moon, and Bruce Rex at the Battelle Pacific Northwest National Laboratories Starlight project for their excellent work leading to this result. The Boeing team also includes Dean Billheimer, Andrew Booker, Fred Holt, Michelle Keim, Dan Pierce, and Jason Wu.
|
1998
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
amoia-martinez-2013-using
|
https://aclanthology.org/W13-2711
|
Using Comparable Collections of Historical Texts for Building a Diachronic Dictionary for Spelling Normalization
|
In this paper, we argue that comparable collections of historical written resources can help overcoming typical challenges posed by heritage texts enhancing spelling normalization, POS-tagging and subsequent diachronic linguistic analyses. Thus, we present a comparable corpus of historical German recipes and show how such a comparable text collection together with the application of innovative MT inspired strategies allow us (i) to address the word form normalization problem and (ii) to automatically generate a diachronic dictionary of spelling variants. Such a diachronic dictionary can be used both for spelling normalization and for extracting new "translation" (word formation/change) rules for diachronic spelling variants. Moreover, our approach can be applied virtually to any diachronic collection of texts regardless of the time span they represent. A first evaluation shows that our approach compares well with state-of-art approaches.
| false |
[] |
[] | null | null | null | null |
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
vlachos-2006-active
|
https://aclanthology.org/W06-2209
|
Active Annotation
|
This paper introduces a semi-supervised learning framework for creating training material, namely active annotation. The main intuition is that an unsupervised method is used to initially annotate imperfectly the data and then the errors made are detected automatically and corrected by a human annotator. We applied active annotation to named entity recognition in the biomedical domain and encouraging results were obtained. The main advantages over the popular active learning framework are that no seed annotated data is needed and that the reusability of the data is maintained. In addition to the framework, an efficient uncertainty estimation for Hidden Markov Models is presented.
| false |
[] |
[] | null | null | null |
The author was funded by BBSRC, grant number 38688. I would like to thank Ted Briscoe and Bob Carpenter for their feedback and comments.
|
2006
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
patry-etal-2006-mood
|
http://www.lrec-conf.org/proceedings/lrec2006/pdf/542_pdf.pdf
|
MOOD: A Modular Object-Oriented Decoder for Statistical Machine Translation
|
We present an Open Source framework called MOOD developed in order to facilitate the development of a Statistical Machine Translation Decoder. MOOD has been modularized using an object-oriented approach which makes it especially suitable for the fast development of state-of-the-art decoders. As a proof of concept, a clone of the PHARAOH decoder has been implemented and evaluated. This clone named RAMSES is part of the current distribution of MOOD.
| false |
[] |
[] | null | null | null | null |
2006
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
feng-etal-2015-blcunlp
|
https://aclanthology.org/S15-2054
|
BLCUNLP: Corpus Pattern Analysis for Verbs Based on Dependency Chain
|
We implemented a syntactic and semantic tagging system for SemEval 2015 Task 15: Corpus Pattern Analysis. For syntactic tagging, we present a Dependency Chain Search Algorithm that is found to be effective at identifying structurally distant subjects and objects. Other syntactic labels are identified using rules defined over dependency parse structures and the output of a verb classification module. Semantic tagging is performed using a simple lexical mapping table combined with postprocessing rules written over phrase structure constituent types and named entity information. The final score of our system is 0.530 F1, ranking second in this task.
| false |
[] |
[] | null | null | null |
We would like to thank the anonymous reviewers for their helpful suggestions and comments. The research work is funded by the Natural Science Foundation of China (No.61300081, 61170162), and the Fundamental Research Funds for the Central Universities in BLCU (No. 15YJ030006).
|
2015
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
wang-etal-2018-semi-autoregressive
|
https://aclanthology.org/D18-1044
|
Semi-Autoregressive Neural Machine Translation
|
Existing approaches to neural machine translation are typically autoregressive models. While these models attain state-of-the-art translation quality, they suffer from low parallelizability and are thus slow at decoding long sequences. In this paper, we propose a novel model for fast sequence generation: the semi-autoregressive Transformer (SAT). The SAT keeps the autoregressive property globally but relaxes it locally, and is thus able to produce multiple successive words in parallel at each time step. Experiments conducted on English-German and Chinese-English translation tasks show that the SAT achieves a good balance between translation quality and decoding speed. On WMT'14 English-German translation, the SAT achieves a 5.58× speedup while maintaining 88% translation quality, significantly better than the previous non-autoregressive methods. When producing two words at each time step, the SAT is almost lossless (only 1% degradation in BLEU score).
| false |
[] |
[] | null | null | null |
We would like to thank the anonymous reviewers for their valuable comments. We also thank Wenfu Wang, Hao Wang for helpful discussion and Linhao Dong, Jinghao Niu for their help in paper writing.
|
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
townsend-etal-2014-university
|
https://aclanthology.org/S14-2136
|
University\_of\_Warwick: SENTIADAPTRON - A Domain Adaptable Sentiment Analyser for Tweets - Meets SemEval
|
We give a brief overview of our system, SentiAdaptron, a domain-sensitive and domain-adaptable system for sentiment analysis in tweets, and discuss performance on SemEval (in both the constrained and unconstrained scenarios), as well as implications arising from comparing the intra- and inter-domain performance on our twitter corpus.
| false |
[] |
[] | null | null | null |
Warwick Research Development Fund grant RD13129 provided funding for crowdsourced annotations. We thank our partners at CUSP, NYU for enabling us to use Amazon Mechanical Turk for this process.
|
2014
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
rim-etal-2020-interchange
|
https://aclanthology.org/2020.lrec-1.893
|
Interchange Formats for Visualization: LIF and MMIF
|
Promoting interoperable computational linguistics (CL) and natural language processing (NLP) application platforms and interchangeable data formats has contributed to improving the discoverability and accessibility of openly available NLP software. In this paper, we discuss the enhanced data visualization capabilities that are also enabled by interoperating NLP pipelines and interchange formats. To add openly available visualization tools and graphical annotation tools to the Language Applications Grid (LAPPS Grid) and Computational Linguistics Applications for Multimedia Services (CLAMS) toolboxes, we have developed interchange formats that can carry annotations and metadata for text and audiovisual source data. We describe those data formats and present case studies where we successfully adopt open-source visualization tools and combine them with CL tools.
| false |
[] |
[] | null | null | null |
We would like to thank the reviewers for their helpful comments. This work was supported by a grant from the National Science Foundation to Brandeis University and Vassar University, and by a grant from the Andrew W. Mellon Foundation to Brandeis University. The points of view expressed herein are solely those of the authors and do not represent the views of the NSF or the Andrew W. Mellon Foundation. Any errors or omissions are, of course, the responsibility of the authors.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
michou-seretan-2009-tool
|
https://aclanthology.org/E09-2012
|
A Tool for Multi-Word Expression Extraction in Modern Greek Using Syntactic Parsing
|
This paper presents a tool for extracting multi-word expressions from corpora in Modern Greek, which is used together with a parallel concordancer to augment the lexicon of a rule-based machine-translation system. The tool is part of a larger extraction system that relies, in turn, on a multilingual parser developed over the past decade in our laboratory. The paper reviews the various NLP modules and resources which enable the retrieval of Greek multi-word expressions and their translations: the Greek parser, its lexical database, the extraction and concordancing system.
| false |
[] |
[] | null | null | null |
This work has been supported by the Swiss National Science Foundation (grant 100012-117944). The authors would like to thank Eric Wehrli for his support and useful comments.
|
2009
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
liu-etal-2013-novel-classifier
|
https://aclanthology.org/P13-2086
|
A Novel Classifier Based on Quantum Computation
|
In this article, we propose a novel classifier based on quantum computation theory. Different from existing methods, we consider the classification as an evolutionary process of a physical system and build the classifier by using the basic quantum mechanics equation. The performance of the experiments on two datasets indicates feasibility and potentiality of the quantum classifier.
| false |
[] |
[] | null | null | null |
This work was supported by the National Natural Science Foundation of China (61171114).
|
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
aktas-etal-2020-adapting
|
https://aclanthology.org/2020.findings-emnlp.222
|
Adapting Coreference Resolution to Twitter Conversations
|
The performance of standard coreference resolution is known to drop significantly on Twitter texts. We improve the performance of the (Lee et al., 2018) system, which is originally trained on OntoNotes, by retraining on manually-annotated Twitter conversation data. Further experiments by combining different portions of OntoNotes with Twitter data show that selecting text genres for the training data can beat the mere maximization of training data amount. In addition, we inspect several phenomena such as the role of deictic pronouns in conversational data, and present additional results for variant settings. Our best configuration improves the performance of the "out of the box" system by 21.6%.
| false |
[] |
[] | null | null | null |
We thank the anonymous reviewers for their helpful comments and suggestions. This work is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -Projektnummer 317633480 -SFB 1287, Project A03.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
mukund-srihari-2009-ne
|
https://aclanthology.org/W09-1609
|
NE Tagging for Urdu based on Bootstrap POS Learning
|
Part of Speech (POS) tagging and Named Entity (NE) tagging have become important components of effective text analysis. In this paper, we propose a bootstrapped model that involves four levels of text processing for Urdu. We show that increasing the training data for POS learning by applying bootstrapping techniques improves NE tagging results. Our model overcomes the limitation imposed by the availability of limited ground truth data required for training a learning model. Both our POS tagging and NE tagging models are based on the Conditional Random Field (CRF) learning approach. To further enhance the performance, grammar rules and lexicon lookups are applied on the final output to correct any spurious tag assignments. We also propose a model for word boundary segmentation where a bigram HMM model is trained for character transitions among all positions in each word. The generated words are further processed using a probabilistic language model. All models use a hybrid approach that combines statistical models with hand crafted grammar rules.
| false |
[] |
[] | null | null | null | null |
2009
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
gotz-meurers-1995-compiling
|
https://aclanthology.org/P95-1012
|
Compiling HPSG type constraints into definite clause programs
|
We present a new approach to HPSG processing: compiling HPSG grammars expressed as type constraints into definite clause programs. This provides a clear and computationally useful correspondence between linguistic theories and their implementation. The compiler performs offline constraint inheritance and code optimization. As a result, we are able to efficiently process with HPSG grammars without having to hand-translate them into definite clause or phrase structure based systems.
| false |
[] |
[] | null | null | null |
The research reported here was carried out in the context of SFB 340, project B4, funded by the Deutsche Forschungsgemeinschaft. We would like to thank Dale Gerdemann, Paul John King and two anonymous referees for helpful discussion and comments.
|
1995
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ahn-frampton-2006-automatic
|
https://aclanthology.org/W06-2006
|
Automatic Generation of Translation Dictionaries Using Intermediary Languages
|
We describe a method which uses one or more intermediary languages in order to automatically generate translation dictionaries. Such a method could potentially be used to efficiently create translation dictionaries for language groups which have as yet had little interaction. For any given word in the source language, our method involves first translating into the intermediary language(s), then into the target language, back into the intermediary language(s) and finally back into the source language. The relationship between a word and the number of possible translations in another language is most often 1-to-many, and so at each stage, the number of possible translations grows exponentially. If we arrive back at the same starting point i.e. the same word in the source language, then we hypothesise that the meanings of the words in the chain have not diverged significantly. Hence we backtrack through the link structure to the target language word and accept this as a suitable translation. We have tested our method by using English as an intermediary language to automatically generate a Spanish-to-German dictionary, and the results are encouraging.
| false |
[] |
[] | null | null | null | null |
2006
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
zheng-etal-2021-low
|
https://aclanthology.org/2021.americasnlp-1.26
|
Low-Resource Machine Translation Using Cross-Lingual Language Model Pretraining
|
This paper describes UTokyo's submission to the AmericasNLP 2021 Shared Task on machine translation systems for indigenous languages of the Americas. We present a low-resource machine translation system that improves translation accuracy using cross-lingual language model pretraining. Our system uses an mBART implementation of FAIRSEQ to pretrain on a large set of monolingual data from a diverse set of high-resource languages before finetuning on 10 low-resource indigenous American languages: Aymara, Bribri, Asháninka, Guaraní, Wixarika, Náhuatl, Hñähñu, Quechua, Shipibo-Konibo, and Rarámuri. On average, our system achieved BLEU scores that were 1.64 higher and CHRF scores that were 0.0749 higher than the baseline.
| false |
[] |
[] | null | null | null | null |
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
vilar-etal-2011-dfkis
|
https://aclanthology.org/2011.iwslt-evaluation.13
|
DFKI's SC and MT submissions to IWSLT 2011
|
We describe DFKI's submission to the System Combination and Machine Translation tracks of the 2011 IWSLT Evaluation Campaign. We focus on a sentence selection mechanism which chooses the (hopefully) best sentence among a set of candidates. The rationale behind it is to take advantage of the strengths of each system, especially given an heterogeneous dataset like the one in this evaluation campaign, composed of TED Talks of very different topics. We focus on using features that correlate well with human judgement and, while our primary system still focus on optimizing the BLEU score on the development set, our goal is to move towards optimizing directly the correlation with human judgement. This kind of system is still under development and was used as a secondary submission.
| false |
[] |
[] | null | null | null |
This work was done with the support of the TaraXÜ project, financed by TSB Technologiestiftung Berlin-Zukunftsfonds Berlin, co-financed by the European Union-European fund for regional development.
|
2011
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
erk-pado-2009-paraphrase
|
https://aclanthology.org/W09-0208
|
Paraphrase Assessment in Structured Vector Space: Exploring Parameters and Datasets
|
The appropriateness of paraphrases for words depends often on context: "grab" can replace "catch" in "catch a ball", but not in "catch a cold". Structured Vector Space (SVS) (Erk and Padó, 2008) is a model that computes word meaning in context in order to assess the appropriateness of such paraphrases. This paper investigates "best-practice" parameter settings for SVS, and it presents a method to obtain large datasets for paraphrase assessment from corpora with WSD annotation.
| false |
[] |
[] | null | null | null | null |
2009
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
elhoseiny-elgammal-2015-visual
|
https://aclanthology.org/W15-2809
|
Visual Classifier Prediction by Distributional Semantic Embedding of Text Descriptions
|
One of the main challenges for scaling up object recognition systems is the lack of annotated images for real-world categories. It is estimated that humans can recognize and discriminate among about 30,000 categories (Biederman and others, 1987) . Typically there are few images available for training classifiers form most of these categories. This is reflected in the number of images per category available for training in most object categorization datasets, which, as pointed out in (Salakhutdinov et al., 2011) , shows a Zipf distribution.
The problem of lack of training images becomes even more severe when we target recognition problems within a general category, i.e., subordinate categorization, for example building classifiers for different bird species or flower types (estimated over 10000 living bird species, similar for flowers).
| false |
[] |
[] | null | null | null | null |
2015
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
newman-griffis-etal-2019-classifying
|
https://aclanthology.org/W19-5001
|
Classifying the reported ability in clinical mobility descriptions
|
Assessing how individuals perform different activities is key information for modeling health states of individuals and populations. Descriptions of activity performance in clinical free text are complex, including syntactic negation and similarities to textual entailment tasks. We explore a variety of methods for the novel task of classifying four types of assertions about activity performance: Able, Unable, Unclear, and None (no information). We find that ensembling an SVM trained with lexical features and a CNN achieves 77.9% macro F1 score on our task, and yields nearly 80% recall on the rare Unclear and Unable samples. Finally, we highlight several challenges in classifying performance assertions, including capturing information about sources of assistance, incorporating syntactic structure and negation scope, and handling new modalities at test time. Our findings establish a strong baseline for this novel task, and identify intriguing areas for further research.
| true |
[] |
[] |
Good Health and Well-Being
| null | null |
The authors would like to thank Pei-Shu Ho, Jonathan Camacho Maldonado, and Maryanne Sacco for discussions about error analysis, and our anonymous reviewers for their helpful comments. This research was supported in part by the Intramural Research Program of the National Institutes of Health, Clinical Research Center and through an Inter-Agency Agreement with the US Social Security Administration.
|
2019
| false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
wolf-gibson-2004-paragraph
|
https://aclanthology.org/P04-1049
|
Paragraph-, Word-, and Coherence-based Approaches to Sentence Ranking: A Comparison of Algorithm and Human Performance
|
Sentence ranking is a crucial part of generating text summaries. We compared human sentence rankings obtained in a psycholinguistic experiment to three different approaches to sentence ranking: A simple paragraph-based approach intended as a baseline, two word-based approaches, and two coherence-based approaches. In the paragraph-based approach, sentences in the beginning of paragraphs received higher importance ratings than other sentences. The word-based approaches determined sentence rankings based on relative word frequencies (Luhn (1958); Salton & Buckley (1988)). Coherence-based approaches determined sentence rankings based on some property of the coherence structure of a text (Marcu (2000); Page et al. (1998)). Our results suggest poor performance for the simple paragraph-based approach, whereas wordbased approaches perform remarkably well. The best performance was achieved by a coherence-based approach where coherence structures are represented in a non-tree structure. Most approaches also outperformed the commercially available MSWord summarizer.
| false |
[] |
[] | null | null | null | null |
2004
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
penton-bird-2004-representing
|
https://aclanthology.org/U04-1017
|
Representing and Rendering Linguistic Paradigms
|
Linguistic forms are inherently multi-dimensional. They exhibit a variety of phonological, orthographic, morphosyntactic, semantic and pragmatic properties. Accordingly, linguistic analysis involves multidimensional exploration, a process in which the same collection of forms is laid out in many ways until clear patterns emerge. Equally, language documentation usually contains tabulations of linguistic forms to illustrate systematic patterns and variations. In all such cases, multi-dimensional data is projected onto a two-dimensional table known as a linguistic paradigm, the most widespread format for linguistic data presentation. In this paper we develop an XML data model for linguistic paradigms, and show how XSL transforms can render them. We describe a high-level interface which gives linguists flexible, high-level control of paradigm layout. The work provides a simple, general, and extensible model for the preservation and access of linguistic data.
| false |
[] |
[] | null | null | null |
This paper extends earlier work by (Penton et al., 2004) . This research has been supported by the National Science Foundation grant number 0317826 Querying Linguistic Databases.
|
2004
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
tanasijevic-etal-2012-multimedia
|
http://www.lrec-conf.org/proceedings/lrec2012/pdf/637_Paper.pdf
|
Multimedia database of the cultural heritage of the Balkans
|
This paper presents a system that is designed to make possible the organization and search within the collected digitized material of intangible cultural heritage. The motivation for building the system was a vast quantity of multimedia documents collected by a team from the Institute for Balkan Studies in Belgrade. The main topic of their research were linguistic properties of speeches that are used in various places in the Balkans by different groups of people. This paper deals with a prototype system that enables the annotation of the collected material and its organization into a native XML database through a graphical interface. The system enables the search of the database and the presentation of digitized multimedia documents and spatial as well as non-spatial information of the queried data. The multimedia content can be read, listened to or watched while spatial properties are presented on the graphics that consists of geographic regions in the Balkans. The system also enables spatial queries by consulting the graph of geographic regions.
| false |
[] |
[] | null | null | null | null |
2012
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
rehm-etal-2013-matecat
|
https://aclanthology.org/2013.mtsummit-european.10
|
MATECAT: Machine Translation Enhanced Computer Assisted Translation META - Multilingual Europe Technology Alliance
|
MateCat is a EU-funded research project (FP7-ICT-2011-7 grant 287688) that aims at improving the integration of machine translation (MT) and human translation within the so-called computer aided translation (CAT) framework.
| false |
[] |
[] | null | null | null | null |
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
taylor-1990-multimedia
|
https://aclanthology.org/1990.tc-1.18
|
Multimedia/Multilanguage publishing for the 1990s
|
Kent Taylor, AT&T Document Development Organisation, Winston-Salem, USA. AT&T is a global information management and movement enterprise, providing computer and telecommunications products and services all over the world. The global nature of the business, combined with rapidly changing technology, calls for innovative approaches to publishing and distributing support documentation. AT&T's Document Development Organisation (DDO) is implementing new systems and processes to meet these needs.
Headquartered in Winston-Salem, North Carolina, DDO created over 750,000 original pages of product/service documentation in 1990. While this is already a huge number, the volume of information we produce is increasing at a rate of more than 25 per cent per year! As the volume increases, the demand for information in electronic form also increases rapidly; and more and more of this information must be translated each year to other languages. And it is unlikely that this 'information explosion', or the demand for more efficient methods of distributing and using it, will abate in the near future. In fact, all indications are that current trends will only accelerate. DDO responded to these demands by implementing an 'object oriented' publishing process. Writers focus on documentation content and structure, using generalised markup, to develop neutral form content models. Form, format and functionality are added to the content in our production systems, via electronic 'style sheets'. Different production systems and style sheets produce a variety of traditional paper documents and server-based, PC-based and CD-ROM-based 'electronic documents'.
| false |
[] |
[] | null | null | null | null |
1990
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
zhao-etal-2021-good
|
https://aclanthology.org/2021.emnlp-main.537
|
It Is Not As Good As You Think! Evaluating Simultaneous Machine Translation on Interpretation Data
|
Most existing simultaneous machine translation (SiMT) systems are trained and evaluated on offline translation corpora. We argue that SiMT systems should be trained and tested on real interpretation data. To illustrate this argument, we propose an interpretation test set and conduct a realistic evaluation of SiMT trained on offline translations. Our results, on our test set along with 3 existing smaller-scale language pairs, highlight a difference of up to 13.83 BLEU score when SiMT models are evaluated on translation vs interpretation data. In the absence of interpretation training data, we propose a translation-to-interpretation (T2I) style transfer method which allows converting existing offline translations into interpretation-style data, leading to up to 2.8 BLEU improvement. However, the evaluation gap remains notable, calling for constructing large-scale interpretation corpora better suited for evaluating and developing SiMT systems.
| false |
[] |
[] | null | null | null | null |
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ladhak-etal-2020-wikilingua
|
https://aclanthology.org/2020.findings-emnlp.360
|
WikiLingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization
|
We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct cross-lingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.
| false |
[] |
[] | null | null | null |
We would like to thank Chris Kedzie and the anonymous reviewers for their feedback. This research is based on work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract FA8650-17-C-9117. This work is also supported in part by National Science Foundation (NSF) grant 1815455 and Defense Advanced Research Projects Agency (DARPA) LwLL FA8750-19-2-0039. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, NSF, DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
lebanoff-etal-2021-semantic
|
https://aclanthology.org/2021.adaptnlp-1.25
|
Semantic Parsing of Brief and Multi-Intent Natural Language Utterances
|
Many military communication domains involve rapidly conveying situation awareness with few words. Converting natural language utterances to logical forms in these domains is challenging, as these utterances are brief and contain multiple intents. In this paper, we present a first effort toward building a weakly-supervised semantic parser to transform brief, multi-intent natural utterances into logical forms. Our findings suggest a new "projection and reduction" method that iteratively performs projection from natural to canonical utterances followed by reduction of natural utterances is the most effective. We conduct extensive experiments on two military and a general-domain dataset and provide a new baseline for future research toward accurate parsing of multi-intent utterances.
| false |
[] |
[] | null | null | null |
This research is based upon work supported by the Naval Air Warfare Center Training Systems Division and the Department of the Navy's Small Business Innovation Research (SBIR) Program, contract N68335-19-C-0052. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Department of the Navy or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
|
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
zheng-etal-2010-growing
|
https://aclanthology.org/P10-3009
|
Growing Related Words from Seed via User Behaviors: A Re-Ranking Based Approach
|
Motivated by Google Sets, we study the problem of growing related words from a single seed word by leveraging user behaviors hiding in user records of Chinese input method. Our proposed method is motivated by the observation that the more frequently two words cooccur in user records, the more related they are. First, we utilize user behaviors to generate candidate words. Then, we utilize search engine to enrich candidate words with adequate semantic features. Finally, we reorder candidate words according to their semantic relatedness to the seed word. Experimental results on a Chinese input method dataset show that our method gains better performance.
| false |
[] |
[] | null | null | null |
We thank Xiance Si and Wufeng Ke for providing the Baidu encyclopedia corpus for evaluation. We also thank the anonymous reviewers for their helpful comments and suggestions. This work is supported by a Tsinghua-Sogou joint research project.
|
2010
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
chang-kou-1988-new
|
https://aclanthology.org/O88-1005
|
A New Approach to Quality Text Generation
| null | false |
[] |
[] | null | null | null | null |
1988
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
gosangi-etal-2021-use
|
https://aclanthology.org/2021.naacl-main.359
|
On the Use of Context for Predicting Citation Worthiness of Sentences in Scholarly Articles
|
In this paper, we study the importance of context in predicting the citation worthiness of sentences in scholarly articles. We formulate this problem as a sequence labeling task solved using a hierarchical BiLSTM model. We contribute a new benchmark dataset containing over two million sentences and their corresponding labels. We preserve the sentence order in this dataset and perform document-level train/test splits, which importantly allows incorporating contextual information in the modeling process. We evaluate the proposed approach on three benchmark datasets. Our results quantify the benefits of using context and contextual embeddings for citation worthiness. Lastly, through error analysis, we provide insights into cases where context plays an essential role in predicting citation worthiness.
| true |
[] |
[] |
Industry, Innovation and Infrastructure
| null | null | null |
2021
| false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false |
tubay-costa-jussa-2018-neural
|
https://aclanthology.org/W18-6449
|
Neural Machine Translation with the Transformer and Multi-Source Romance Languages for the Biomedical WMT 2018 task
|
The Transformer architecture has become the state-of-the-art in Machine Translation. This model, which relies on attention-based mechanisms, has outperformed previous neural machine translation architectures in several tasks. In this system description paper, we report details of training neural machine translation with multi-source Romance languages with the Transformer model and in the evaluation frame of the biomedical WMT 2018 task. Using multi-source languages from the same family allows improvements of over 6 BLEU points.
| true |
[] |
[] |
Good Health and Well-Being
| null | null |
Authors would like to thank Noe Casas for his valuable comments. This work is supported in
|
2018
| false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
yi-etal-2007-semantic
|
https://aclanthology.org/N07-1069
|
Can Semantic Roles Generalize Across Genres?
|
PropBank has been widely used as training data for Semantic Role Labeling. However, because this training data is taken from the WSJ, the resulting machine learning models tend to overfit on idiosyncrasies of that text's style, and do not port well to other genres. In addition, since PropBank was designed on a verb-by-verb basis, the argument labels Arg2-Arg5 get used for very diverse argument roles with inconsistent training instances. For example, the verb "make" uses Arg2 for the "Material" argument; but the verb "multiply" uses Arg2 for the "Extent" argument. As a result, it can be difficult for automatic classifiers to learn to distinguish arguments Arg2-Arg5. We have created a mapping between PropBank and VerbNet that provides a VerbNet thematic role label for each verb-specific PropBank label. Since VerbNet uses argument labels that are more consistent across verbs, we are able to demonstrate that these new labels are easier to learn.
| false |
[] |
[] | null | null | null | null |
2007
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
herbelot-vecchi-2015-building
|
https://aclanthology.org/D15-1003
|
Building a shared world: mapping distributional to model-theoretic semantic spaces
|
In this paper, we introduce an approach to automatically map a standard distributional semantic space onto a set-theoretic model. We predict that there is a functional relationship between distributional information and vectorial concept representations in which dimensions are predicates and weights are generalised quantifiers. In order to test our prediction, we learn a model of such relationship over a publicly available dataset of feature norms annotated with natural language quantifiers. Our initial experimental results show that, at least for domain-specific data, we can indeed map between formalisms, and generate high-quality vector representations which encapsulate set overlap information. We further investigate the generation of natural language quantifiers from such vectors.
| false |
[] |
[] | null | null | null |
We thank Marco Baroni, Stephen Clark, Ann Copestake and Katrin Erk for their helpful comments on a previous version of this paper, and the three anonymous reviewers for their thorough feedback on this work. Eva Maria Vecchi is supported by ERC Starting Grant DisCoTex (306920).
|
2015
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
wang-etal-2019-aspect
|
https://aclanthology.org/P19-1345
|
Aspect Sentiment Classification Towards Question-Answering with Reinforced Bidirectional Attention Network
|
In the literature, existing studies on aspect sentiment classification (ASC) focus on individual non-interactive reviews. This paper extends the research to interactive reviews and proposes a new research task, namely Aspect Sentiment Classification towards Question-Answering (ASC-QA), for real-world applications. This new task aims to predict sentiment polarities for specific aspects from interactive QA style reviews. In particular, a high-quality annotated corpus is constructed for ASC-QA to facilitate corresponding research. On this basis, a Reinforced Bidirectional Attention Network (RBAN) approach is proposed to address two inherent challenges in ASC-QA, i.e., semantic matching between question and answer, and data noise. Experimental results demonstrate the great advantage of the proposed approach to ASC-QA against several state-of-the-art baselines.
| false |
[] |
[] | null | null | null |
We thank our anonymous reviewers for their helpful comments. This work was supported by three NSFC grants, i.e., No.61672366, No.61702149 and No.61525205. This work was also supported by the joint research project of Alibaba Group and Soochow University.
|
2019
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ghaeini-etal-2018-dependent
|
https://aclanthology.org/C18-1282
|
Dependent Gated Reading for Cloze-Style Question Answering
|
We present a novel deep learning architecture to address the cloze-style question answering task. Existing approaches employ reading mechanisms that do not fully exploit the interdependency between the document and the query. In this paper, we propose a novel dependent gated reading bidirectional GRU network (DGR) to efficiently model the relationship between the document and the query during encoding and decision making. Our evaluation shows that DGR obtains highly competitive performance on well-known machine comprehension benchmarks such as the Children's Book Test (CBT-NE and CBT-CN) and Who DiD What (WDW, Strict and Relaxed). Finally, we extensively analyze and validate our model by ablation and attention studies.
| false |
[] |
[] | null | null | null | null |
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
chen-etal-2017-leveraging
|
https://aclanthology.org/K17-1006
|
Leveraging Eventive Information for Better Metaphor Detection and Classification
|
Metaphor detection has been both challenging and rewarding in natural language processing applications. This study offers a new approach based on eventive information in detecting metaphors by leveraging the Chinese writing system, which is a culturally bound ontological system organized according to the basic concepts represented by radicals. As such, the information represented is available in all Chinese text without pre-processing. Since metaphor detection is another culturally based conceptual representation, we hypothesize that sub-textual information can facilitate the identification and classification of the types of metaphoric events denoted in Chinese text. We propose a set of syntactic conditions crucial to event structures to improve the model based on the classification of radical groups. With the proposed syntactic conditions, the model achieves a performance of 0.8859 in terms of F-score, an improvement of 1.7% over the same classifier with only bag-of-words features. Results show that eventive information can improve the effectiveness of metaphor detection. Event information is rooted in every language, and thus this approach has a high potential to be applied to metaphor detection in other languages.
| false |
[] |
[] | null | null | null |
The work is partially supported by the following research grants from Hong Kong Polytechnic University: 1-YW1V, 4-ZZFE and RTVU; as well as GRF grants (PolyU 15211/14E and PolyU 152006/16E).
|
2017
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
song-etal-2010-enhanced
|
http://www.lrec-conf.org/proceedings/lrec2010/pdf/798_Paper.pdf
|
Enhanced Infrastructure for Creation and Collection of Translation Resources
|
[Table: translation resources by language pair and collection method — Chinese > English: 100M+ (BN, BC, NW, WB; manual translation, parallel text harvesting, acquisition of existing manual translations); English > Chinese: 250K+ (BN, BC, NW, WB; manual translation, parallel text harvesting, acquisition of existing manual translations); English > Arabic: 250K+ (BN, BC, NW, WB; manual translation); Bengali, Pashto, Punjabi, Tagalog, Tamil, Thai, Urdu, Uzbek > English: 250-500K+ per language pair (NW, WB; manual translation, parallel text harvesting)]
| false |
[] |
[] | null | null | null |
This work was supported in part by the Defense Advanced Research Projects Agency, GALE Program Grant No. HR0011-06-1-0003. The content of this paper does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
|
2010
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
delpech-saint-dizier-2008-investigating
|
http://www.lrec-conf.org/proceedings/lrec2008/pdf/20_paper.pdf
|
Investigating the Structure of Procedural Texts for Answering How-to Questions
|
This paper presents ongoing work dedicated to parsing the textual structure of procedural texts. We propose here a model for the instructional structure and criteria to identify its main components: titles, instructions, warnings and prerequisites. The main aim of this project, besides a contribution to text processing, is to be able to answer procedural questions (How-to? questions), where the answer is a well-formed portion of a text, not a small set of words as for factoid questions.
| false |
[] |
[] | null | null | null |
Acknowledgements This paper relates work realized within the French ANR project TextCoop. We thank its partners for stimulating discussions.
|
2008
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
srivastava-etal-2018-identifying
|
https://aclanthology.org/W18-4412
|
Identifying Aggression and Toxicity in Comments using Capsule Network
|
Aggression and related activities like trolling, hate speech etc. involve toxic comments in various forms. These are common scenarios in today's time and websites react by shutting down their comment sections. To tackle this, an algorithmic solution is preferred to human moderation which is slow and expensive. In this paper, we propose a single model capsule network with focal loss to achieve this task which is suitable for production environment. Our model achieves competitive results over other strong baseline methods, which show its effectiveness and that focal loss exhibits significant improvement in such cases where class imbalance is a regular issue. Additionally, we show that the problem of extensive data preprocessing, data augmentation can be tackled by capsule networks implicitly. We achieve an overall ROC AUC of 98.46 on the Kaggle toxic comment dataset and show that it beats other architectures by a good margin. As comments tend to be written in more than one language, and transliteration is a common problem, we further show that our model handles this effectively by applying our model on the TRAC shared task dataset which contains comments in code-mixed Hindi-English.
| true |
[] |
[] |
Peace, Justice and Strong Institutions
| null | null | null |
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false |
wich-etal-2020-investigating
|
https://aclanthology.org/2020.alw-1.22
|
Investigating Annotator Bias with a Graph-Based Approach
|
A challenge that many online platforms face is hate speech or any other form of online abuse. To cope with this, hate speech detection systems are developed based on machine learning to reduce manual work for monitoring these platforms. Unfortunately, machine learning is vulnerable to unintended bias in training data, which could have severe consequences, such as a decrease in classification performance or unfair behavior (e.g., discriminating minorities). In the scope of this study, we want to investigate annotator bias, a form of bias that annotators cause due to differing knowledge of the task and their subjective perception. Our goal is to identify annotation bias based on similarities in the annotation behavior from annotators. To do so, we build a graph based on the annotations from the different annotators, apply a community detection algorithm to group the annotators, and train for each group classifiers whose performances we compare. By doing so, we are able to identify annotator bias within a data set. The proposed method and collected insights can contribute to developing fairer and more reliable hate speech classification models.
| false |
[] |
[] | null | null | null |
This research has been partially funded by a scholarship from the Hanns Seidel Foundation financed by the German Federal Ministry of Education and Research.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
hobbs-etal-1992-robust
|
https://aclanthology.org/A92-1026
|
Robust Processing of Real-World Natural-Language Texts
|
It is often assumed that when natural language processing meets the real world, the ideal of aiming for complete and correct interpretations has to be abandoned. However, our experience with TACITUS, especially in the MUC-3 evaluation, has shown that principled techniques for syntactic and pragmatic analysis can be bolstered with methods for achieving robustness. We describe three techniques for making syntactic analysis more robust: an agenda-based scheduling parser, a recovery technique for failed parses, and a new technique called terminal substring parsing. For pragmatics processing, we describe how the method of abductive inference is inherently robust, in that an interpretation is always possible, so that in the absence of the required world knowledge, performance degrades gracefully. Each of these techniques has been evaluated and the results of the evaluations are presented.
| false |
[] |
[] | null | null | null |
This research has been funded by the Defense Advanced Research Projects Agency under Office of Naval Research contracts N00014-85-C-0013 and N00014-90-C-0220.
|
1992
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
miura-etal-2014-teamx
|
https://aclanthology.org/S14-2111
|
TeamX: A Sentiment Analyzer with Enhanced Lexicon Mapping and Weighting Scheme for Unbalanced Data
|
This paper describes the system that has been used by TeamX in SemEval-2014 Task 9 Subtask B. The system is a sentiment analyzer based on a supervised text categorization approach designed with the following two concepts. Firstly, since lexicon features were shown to be effective in SemEval-2013 Task 2, various lexicons and pre-processors for them are introduced to enhance lexical information. Secondly, since the distribution of sentiment on tweets is known to be unbalanced, a weighting scheme is introduced to bias the output of a machine learner. For the test run, the system was tuned towards Twitter texts and successfully achieved high scoring results on Twitter data, average F1 70.96 on Twitter2014 and average F1 56.50 on Twitter2014Sarcasm.
| false |
[] |
[] | null | null | null |
We would like to thank the anonymous reviewers for their valuable comments to improve this paper.
|
2014
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
kennington-schlangen-2021-incremental
|
https://aclanthology.org/2021.mmsr-1.8
|
Incremental Unit Networks for Multimodal, Fine-grained Information State Representation
|
We offer a sketch of a fine-grained information state annotation scheme that follows directly from the Incremental Unit abstract model of dialogue processing when used within a multimodal, co-located, interactive setting. We explain the Incremental Unit model and give an example application using the Localized Narratives dataset, then offer avenues for future research.
| false |
[] |
[] | null | null | null |
Acknowledgements We appreciate the feedback from the anonymous reviewers.
|
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
grimm-etal-2015-towards
|
https://aclanthology.org/W15-2405
|
Towards a Model of Prediction-based Syntactic Category Acquisition: First Steps with Word Embeddings
|
We present a prototype model, based on a combination of count-based distributional semantics and prediction-based neural word embeddings, which learns about syntactic categories as a function of (1) writing contextual, phonological, and lexical-stress-related information to memory and (2) predicting upcoming context words based on memorized information. The system is a first step towards utilizing recently popular methods from Natural Language Processing for exploring the role of prediction in children's acquisition of syntactic categories.
| false |
[] |
[] | null | null | null |
The present research was supported by a BOF/TOP grant (ID 29072) of the Research Council of the University of Antwerp.
|
2015
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
noro-tokuda-2008-ranking
|
https://aclanthology.org/I08-2092
|
Ranking Words for Building a Japanese Defining Vocabulary
|
Defining all words in a Japanese dictionary by using a limited number of words (defining vocabulary) is helpful for Japanese children and second-language learners of Japanese. Although some English dictionaries have their own defining vocabulary, no Japanese dictionary has such vocabulary as of yet. As the first step toward building a Japanese defining vocabulary, we ranked Japanese words based on a graphbased method. In this paper, we introduce the method, and show some evaluation results of applying the method to an existing Japanese dictionary.
| false |
[] |
[] | null | null | null | null |
2008
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
kleiweg-van-noord-2020-alpinograph
|
https://aclanthology.org/2020.tlt-1.13
|
AlpinoGraph: A Graph-based Search Engine for Flexible and Efficient Treebank Search
|
AlpinoGraph is a graph-based search engine which provides treebank search using SQL database technology coupled with the Cypher query language for graphs. In the paper, we show that AlpinoGraph is a very powerful and very flexible approach towards treebank search. At the same time, AlpinoGraph is efficient. Currently, AlpinoGraph is applicable for all standard Dutch treebanks. We compare the Cypher queries in AlpinoGraph with the XPath queries used in earlier treebank search applications for the same treebanks. We also present a pre-processing technique which speeds up query processing dramatically in some cases, and is applicable beyond AlpinoGraph.
| false |
[] |
[] | null | null | null | null |
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
geng-etal-2022-improving
|
https://aclanthology.org/2022.acl-long.20
|
Improving Personalized Explanation Generation through Visualization
|
In modern recommender systems, there are usually comments or reviews from users that justify their ratings for different items. Trained on such textual corpus, explainable recommendation models learn to discover user interests and generate personalized explanations. Though able to provide plausible explanations, existing models tend to generate repeated sentences for different items or empty sentences with insufficient details. This begs an interesting question: can we immerse the models in a multimodal environment to gain proper awareness of real-world concepts and alleviate above shortcomings? To this end, we propose a visually-enhanced approach named METER with the help of visualization generation and text-image matching discrimination: the explainable recommendation model is encouraged to visualize what it refers to while incurring a penalty if the visualization is incongruent with the textual explanation. Experimental results and a manual assessment demonstrate that our approach can improve not only the text quality but also the diversity and explainability of the generated explanations.
| false |
[] |
[] | null | null | null |
We appreciate the valuable feedback and suggestions of the reviewers. This work was supported in part by NSF IIS 1910154, 2007907, and 2046457. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsors.
|
2022
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
s-r-etal-2022-sentiment
|
https://aclanthology.org/2022.dravidianlangtech-1.29
|
Sentiment Analysis on Code-Switched Dravidian Languages with Kernel Based Extreme Learning Machines
|
Code-switching refers to the textual or spoken data containing multiple languages. Application of natural language processing (NLP) tasks like sentiment analysis is a harder problem on code-switched languages due to the irregularities in the sentence structuring and ordering. This paper shows the experiment results of building Kernel-based Extreme Learning Machines (ELM) for sentiment analysis for code-switched Dravidian languages with English. Our results show that ELM performs better than traditional machine learning classifiers on various metrics as well as trains faster than deep learning models. We also show that polynomial kernels perform better than others in the ELM architecture. We were able to achieve a median AUC of 0.79 with a polynomial kernel.
| false |
[] |
[] | null | null | null | null |
2022
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
beckley-roark-2011-asynchronous
|
https://aclanthology.org/W11-2305
|
Asynchronous fixed-grid scanning with dynamic codes
|
In this paper, we examine several methods for including dynamic, contextually-sensitive binary codes within indirect selection typing methods using a grid with fixed symbol positions. Using Huffman codes derived from a character n-gram model, we investigate both synchronous (fixed latency highlighting) and asynchronous (self-paced using long versus short press) scanning. Additionally, we look at methods that allow for scanning past a target and returning to it versus methods that remove unselected items from consideration. Finally, we investigate a novel method for displaying the binary codes for each symbol to the user, rather than using cell highlighting, as the means for identifying the required input sequence for the target symbol. We demonstrate that dynamic coding methods for fixed position grids can be tailored for very diverse user requirements.
| false |
[] |
[] | null | null | null | null |
2011
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
muischnek-muurisep-2017-estonian
|
https://aclanthology.org/W17-0410
|
Estonian Copular and Existential Constructions as an UD Annotation Problem
|
This article is about annotating clauses with nonverbal predication in version 2 of the Estonian UD treebank. Three possible annotation schemas are discussed, among which separating existential clauses from copular clauses would be theoretically most sound but would need too much manual labor and could possibly yield inconsistent annotation. Therefore, a solution has been adopted which separates existential clauses consisting only of a subject and the (copular) verb olema 'be' from all other olema-clauses.
| false |
[] |
[] | null | null | null |
This study was supported by the Estonian Ministry of Education and Research (IUT20-56), and by the European Union through the European Regional Development Fund (Centre of Excellence in Estonian Studies).
|
2017
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
wang-etal-2020-pretrain
|
https://aclanthology.org/2020.acl-main.200
|
To Pretrain or Not to Pretrain: Examining the Benefits of Pretrainng on Resource Rich Tasks
|
Pretraining NLP models with variants of Masked Language Model (MLM) objectives has recently led to a significant improvements on many tasks. This paper examines the benefits of pretrained models as a function of the number of training samples used in the downstream task. On several text classification tasks, we show that as the number of training examples grow into the millions, the accuracy gap between finetuning BERT-based model and training vanilla LSTM from scratch narrows to within 1%. Our findings indicate that MLM-based models might reach a diminishing return point as the supervised data size increases significantly.
| false |
[] |
[] | null | null | null | null |
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
navarretta-2000-semantic
|
https://aclanthology.org/W99-1013
|
Semantic Clustering of Adjectives and Verbs Based on Syntactic Patterns
|
In this paper we show that some of the syntactic patterns in an NLP lexicon can be used to identify semantically "similar" adjectives and verbs. We define semantic similarity on the basis of parameters used in the literature to classify adjectives and verbs semantically. The semantic clusters obtained from the syntactic encodings in the lexicon are evaluated by comparing them with semantic groups in existing taxonomies. The relation between adjectival syntactic patterns and their meaning is particularly interesting, because it has not been explored in the literature as much as it is the case for the relation between verbal complements and arguments. The identification of semantic groups on the basis of the syntactic encodings in the considered NLP lexicon can also be extended to other word classes and, maybe, to other languages for which the same type of lexicon exists.
| false |
[] |
[] | null | null | null | null |
2000
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
wan-etal-2020-self
|
https://aclanthology.org/2020.emnlp-main.80
|
Self-Paced Learning for Neural Machine Translation
|
Recent studies have proven that the training of neural machine translation (NMT) can be facilitated by mimicking the learning process of humans. Nevertheless, achievements of such kind of curriculum learning rely on the quality of an artificial schedule drawn up with handcrafted features, e.g. sentence length or word rarity. We ameliorate this procedure in a more flexible manner by proposing self-paced learning, where the NMT model is allowed to 1) automatically quantify the learning confidence over training examples; and 2) flexibly govern its learning via regulating the loss in each iteration step. Experimental results over multiple translation tasks demonstrate that the proposed model yields better performance than strong baselines and those models trained with human-designed curricula on both translation quality and convergence speed.
| false |
[] |
[] | null | null | null | null |
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
smith-eisner-2006-annealing
|
https://aclanthology.org/P06-1072
|
Annealing Structural Bias in Multilingual Weighted Grammar Induction
|
We first show how a structural locality bias can improve the accuracy of state-of-the-art dependency grammar induction models trained by EM from unannotated examples (Klein and Manning, 2004). Next, by annealing the free parameter that controls this bias, we achieve further improvements. We then describe an alternative kind of structural bias, toward "broken" hypotheses consisting of partial structures over segmented sentences, and show a similar pattern of improvement. We relate this approach to contrastive estimation (Smith and Eisner, 2005a), apply the latter to grammar induction in six languages, and show that our new approach improves accuracy by 1-17% (absolute) over CE (and 8-30% over EM), achieving to our knowledge the best results on this task to date. Our method, structural annealing, is a general technique with broad applicability to hidden-structure discovery problems.
| false |
[] |
[] | null | null | null | null |
2006
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
shen-etal-2022-parallel
|
https://aclanthology.org/2022.acl-long.67
|
Parallel Instance Query Network for Named Entity Recognition
|
Named entity recognition (NER) is a fundamental task in natural language processing. Recent works treat named entity recognition as a reading comprehension task, constructing type-specific queries manually to extract entities. This paradigm suffers from three issues. First, type-specific queries can only extract one type of entities per inference, which is inefficient. Second, the extraction for different types of entities is isolated, ignoring the dependencies between them. Third, query construction relies on external knowledge and is difficult to apply to realistic scenarios with hundreds of entity types. To deal with them, we propose Parallel Instance Query Network (PIQN), which sets up global and learnable instance queries to extract entities from a sentence in a parallel manner. Each instance query predicts one entity, and by feeding all instance queries simultaneously, we can query all entities in parallel. Instead of being constructed from external knowledge, instance queries can learn their different query semantics during training. For training the model, we treat label assignment as a one-to-many Linear Assignment Problem (LAP) and dynamically assign gold entities to instance queries with minimal assignment cost. Experiments on both nested and flat NER datasets demonstrate that our proposed method outperforms previous state-of-the-art models.
| false |
[] |
[] | null | null | null | null |
2022
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
hoenen-2016-wikipedia
|
https://aclanthology.org/L16-1335
|
Wikipedia Titles As Noun Tag Predictors
|
In this paper, we investigate a covert labeling cue, namely the probability that a title (by example of the Wikipedia titles) is a noun. If this probability is very large, any list such as or comparable to the Wikipedia titles can be used as a reliable word-class (or part-of-speech tag) predictor or noun lexicon. This may be especially useful in the case of Low Resource Languages (LRL) where labeled data is lacking and putatively for Natural Language Processing (NLP) tasks such as Word Sense Disambiguation, Sentiment Analysis and Machine Translation. Profitting from the ease of digital publication on the web as opposed to print, LRL speaker communities produce resources such as Wikipedia and Wiktionary, which can be used for an assessment. We provide statistical evidence for a strong noun bias for the Wikipedia titles from 2 corpora (English, Persian) and a dictionary (Japanese) and for a typologically balanced set of 17 languages including LRLs. Additionally, we conduct a small experiment on predicting noun tags for out-of-vocabulary items in part-of-speech tagging for English.
| false |
[] |
[] | null | null | null |
We greatfully acknowledge the support arising from the collaboration of the empirical linguistics department and computer science manifesting in the Centre for the Digital Foundation of Research in the Humanities, Social, and Educational Sciences (CEDIFOR: https://www. cedifor.de/en/cedifor/).
|
2016
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
johannessen-etal-2009-nordic
|
https://aclanthology.org/W09-4612
|
The Nordic Dialect Corpus--an advanced research tool
|
The paper describes the first part of the Nordic Dialect Corpus. This is a tool that combines a number of useful features that together make it a unique and very advanced resource for researchers in many fields of language research. The corpus is web-based and features full audiovisual representation linked to transcripts. Credits: The Nordic Dialect Corpus is the result of close collaboration between the partners in the research networks Scandinavian Dialect Syntax and Nordic Centre of Excellence in Microcomparative Syntax. The researchers in the network have contributed in everything from decisions to actual work ranging from methodology to recordings, transcription, and annotation. Some of the corpus (in particular, recordings of informants) has been financed by the national research councils in the individual countries, while the technical development has been financed by the University of Oslo and the Norwegian Research Council, plus the Nordic research funds NOS-HS and NordForsk.
| false |
[] |
[] | null | null | null |
In addition to participants in the ScanDiaSyn and NORMS networks, we would like to thank three anonymous NODALIDA-09 reviewers for valuable comments.
|
2009
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
liu-etal-2018-itnlp
|
https://aclanthology.org/S18-1183
|
ITNLP-ARC at SemEval-2018 Task 12: Argument Reasoning Comprehension with Attention
|
Reasoning is a very important topic and has many important applications in the field of natural language processing. Semantic Evaluation (SemEval) 2018 Task 12, "The Argument Reasoning Comprehension", is committed to researching natural language reasoning. In this task, we proposed a novel argument reasoning comprehension system, ITNLP-ARC, which uses neural network technology to solve this problem. In our system, the LSTM model is involved to encode both the premise sentences and the warrant sentences. The attention model is used to merge the two premise sentence vectors. Through comparing the similarity between the attention vector and each of the two warrant vectors, we choose the one with higher similarity as our system's final answer.
| false |
[] |
[] | null | null | null |
This work is sponsored by the National High Technology Research and Development Program of China (2015AA015405) and National Natural Science Foundation of China (61572151 and 61602131).
|
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
araki-etal-1994-evaluation-detect
|
https://aclanthology.org/C94-1030
|
An Evaluation to Detect and Correct Erroneous Characters Wrongly Substituted, Deleted and Inserted in Japanese and English Sentences Using Markov Models
|
In optical character recognition and continuous speech recognition of a natural language, it has been difficult to detect error characters which are wrongly deleted and inserted. In order to judge three types of errors (characters wrongly substituted, deleted or inserted in a Japanese "bunsetsu" and an English word) and to correct these errors, this paper proposes new methods using an m-th order Markov chain model for Japanese "kanji-kana" characters and English alphabets, assuming that the Markov probability of a correct chain of syllables or "kanji-kana" characters is greater than that of erroneous chains. From the results of the experiments, it is concluded that the methods are useful for detecting as well as correcting these errors in Japanese "bunsetsu" and English words.
| false |
[] |
[] | null | null | null | null |
1994
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
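The Araki et al. abstract above rests on one assumption: the Markov probability of a correct character chain exceeds that of an erroneous one. A toy first-order (bigram) sketch of that scoring idea with add-one smoothing; the corpus, thresholding strategy, and function names are illustrative, not taken from the paper:

```python
import math
from collections import defaultdict

def train_bigram(corpus):
    """Count-based first-order Markov model over characters."""
    bigrams = defaultdict(int)
    unigrams = defaultdict(int)
    vocab = set()
    for text in corpus:
        for a, b in zip(text, text[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
            vocab.update((a, b))
    return bigrams, unigrams, vocab

def chain_log_prob(text, model):
    """Log-probability of a character chain under the Markov model,
    with add-one smoothing at query time."""
    bigrams, unigrams, vocab = model
    v = len(vocab)
    lp = 0.0
    for a, b in zip(text, text[1:]):
        lp += math.log((bigrams[(a, b)] + 1) / (unigrams[a] + v))
    return lp
```

A detector in this spirit would flag a "bunsetsu" or word whose chain log-probability falls below that of candidate corrections.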
callaway-2008-textcap
|
https://aclanthology.org/W08-2226
|
The TextCap Semantic Interpreter
|
The lack of large amounts of readily available, explicitly represented knowledge has long been recognized as a barrier to applications requiring semantic knowledge such as machine translation and question answering. This problem is analogous to that facing machine translation decades ago, where one proposed solution was to use human translators to post-edit automatically produced, low quality translations rather than expect a computer to independently create high-quality translations. This paper describes an attempt at implementing a semantic parser that takes unrestricted English text, uses publically available computational linguistics tools and lexical resources and as output produces semantic triples which can be used in a variety of tasks such as generating knowledge bases, providing raw material for question answering systems, or creating RDF structures. We describe the TEXTCAP system, detail the semantic triple representation it produces, illustrate step by step how TEXTCAP processes a short text, and use its results on unseen texts to discuss the amount of post-editing that might be realistically required.
| false |
[] |
[] | null | null | null | null |
2008
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
xu-etal-2003-training
|
https://aclanthology.org/W03-1021
|
Training Connectionist Models for the Structured Language Model
|
We investigate the performance of the Structured Language Model (SLM) in terms of perplexity (PPL) when its components are modeled by connectionist models. The connectionist models use a distributed representation of the items in the history and make much better use of contexts than currently used interpolated or back-off models, not only because of the inherent capability of the connectionist model in fighting the data sparseness problem, but also because of the sublinear growth in the model size when the context length is increased. The connectionist models can be further trained by an EM procedure, similar to the previously used procedure for training the SLM. Our experiments show that the connectionist models can significantly improve the PPL over the interpolated and back-off models on the UPENN Treebank corpora, after interpolating with a baseline trigram language model. The EM training procedure can improve the connectionist models further, by using hidden events obtained by the SLM parser.
| false |
[] |
[] | null | null | null | null |
2003
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
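The Xu et al. abstract above evaluates language models by perplexity (PPL). As a reminder of what is being measured, PPL is the exponentiated mean negative log-probability the model assigns to each token; a minimal sketch:

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence, given the model's probability for
    each token in turn: exp of the mean negative log-probability."""
    n = len(token_probs)
    nll = -sum(math.log(p) for p in token_probs) / n
    return math.exp(nll)
```

A model that assigns every token probability 1/V has perplexity exactly V, which is why PPL is often read as an effective branching factor.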
terragni-etal-2020-matters
|
https://aclanthology.org/2020.insights-1.5
|
Which Matters Most? Comparing the Impact of Concept and Document Relationships in Topic Models
|
Topic models have been widely used to discover hidden topics in a collection of documents. In this paper, we propose to investigate the role of two different types of relational information, i.e. document relationships and concept relationships. While exploiting the document network significantly improves topic coherence, the introduction of concepts and their relationships does not influence the results both quantitatively and qualitatively.
| false |
[] |
[] | null | null | null | null |
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
charniak-1978-spoon-hand
|
https://aclanthology.org/T78-1027
|
With a Spoon in Hand This Must Be the Eating Frame
|
A language comprehension program using "frames", "scripts", etc. must be able to decide which frames are appropriate to the text. Often there will be explicit indication ("Fred was playing tennis" suggests the TENNIS frame) but it is not always so easy. ("The woman waved while the man on the stage sawed her in half" suggests MAGICIAN but how?) This paper will examine how a program might go about determining the appropriate frame in such cases. At a sufficiently vague level the model presented here will resemble that of Minsky (1975) in its assumption that one usually has available one or more context frames. Hence one only needs worry if information comes in which does not fit them. As opposed to Minsky, however, the suggestions for new context frames will not come from the old ones, but rather from the conflicting information. The problem then becomes how potential frames are indexed under the information which "suggests" them.
| false |
[] |
[] | null | null | null |
I have benefited from conversations with J.
|
1978
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
poignant-etal-2016-camomile
|
https://aclanthology.org/L16-1226
|
The CAMOMILE Collaborative Annotation Platform for Multi-modal, Multi-lingual and Multi-media Documents
|
In this paper, we describe the organization and the implementation of the CAMOMILE collaborative annotation framework for multimodal, multimedia, multilingual (3M) data. Given the versatile nature of the analysis which can be performed on 3M data, the structure of the server was kept intentionally simple in order to preserve its genericity, relying on standard Web technologies. Layers of annotations, defined as data associated to a media fragment from the corpus, are stored in a database and can be managed through standard interfaces with authentication. Interfaces tailored specifically to the needed task can then be developed in an agile way, relying on simple but reliable services for the management of the centralized annotations. We then present our implementation of an active learning scenario for person annotation in video, relying on the CAMOMILE server; during a dry run experiment, the manual annotation of 716 speech segments was thus propagated to 3504 labeled tracks. The code of the CAMOMILE framework is distributed in open source.
| false |
[] |
[] | null | null | null |
We thank the members of the CAMOMILE international advisory committee for their time and their precious advices and proposals. This work was done in the context of the CHIST-ERA CAMOMILE project funded by the ANR (Agence Nationale de la Recherche, France) under grant ANR-12-CHRI-0006-01, the FNR (Fonds National de La Recherche, Luxembourg), Tübitak (scientific and technological research council of Turkey) and Mineco (Ministerio de Economía y Competitividad, Spain).
|
2016
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
hwang-lee-2021-semi
|
https://aclanthology.org/2021.ranlp-1.67
|
Semi-Supervised Learning Based on Auto-generated Lexicon Using XAI in Sentiment Analysis
|
In this study, we proposed a novel lexicon-based pseudo-labeling method utilizing an explainable AI (XAI) approach. The existing approach has a fundamental limitation in its robustness because a poor classifier leads to inaccurate soft-labeling, which in turn leads to a poor classifier repetitively. Meanwhile, we generate a lexicon consisting of sentiment words based on the explainability score. Then we calculate the confidence of unlabeled data with the lexicon and add them into the labeled dataset for a robust pseudo-labeling approach. Our proposed method has three contributions. First, the proposed methodology automatically generates a lexicon based on XAI and performs independent pseudo-labeling, thereby guaranteeing higher performance and robustness compared to the existing one. Second, since lexicon-based pseudo-labeling is performed without re-learning in most models, time efficiency is considerably increased, and third, the generated high-quality lexicon can be made available for sentiment analysis of data from similar domains. The effectiveness and efficiency of our proposed method were verified through quantitative comparison with the existing pseudo-labeling method and qualitative review of the generated lexicon.
| false |
[] |
[] | null | null | null | null |
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
van-halteren-oostdijk-2018-identification
|
https://aclanthology.org/W18-3923
|
Identification of Differences between Dutch Language Varieties with the VarDial2018 Dutch-Flemish Subtitle Data
|
With the goal of discovering differences between Belgian and Netherlandic Dutch, we participated as Team Taurus in the Dutch-Flemish Subtitles task of VarDial2018. We used a rather simple marker-based method, but with a wide range of features, including lexical, lexico-syntactic and syntactic ones, and achieved a second position in the ranking. Inspection of highly distinguishing features did point towards differences between the two language varieties, but because of the nature of the experimental data, we have to treat our observations as very tentative and in need of further investigation.
| false |
[] |
[] | null | null | null |
We thank Erwin Komen and Micha Hulsbosch for preparing a script for the analysis of the text with Frog, Alpino and the surfacing software.
|
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
brew-schulte-im-walde-2002-spectral
|
https://aclanthology.org/W02-1016
|
Spectral Clustering for German Verbs
|
We describe and evaluate the application of a spectral clustering technique (Ng et al., 2002) to the unsupervised clustering of German verbs. Our previous work has shown that standard clustering techniques succeed in inducing Levinstyle semantic classes from verb subcategorisation information. But clustering in the very high dimensional spaces that we use is fraught with technical and conceptual difficulties. Spectral clustering performs a dimensionality reduction on the verb frame patterns, and provides a robustness and efficiency that standard clustering methods do not display in direct use. The clustering results are evaluated according to the alignment (Christianini et al., 2002) between the Gram matrix defined by the cluster output and the corresponding matrix defined by a gold standard.
| false |
[] |
[] | null | null | null | null |
2002
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
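The spectral clustering recipe referenced above (Ng et al., 2002) performs its dimensionality reduction via the top eigenvectors of a normalised affinity matrix before clustering. A minimal NumPy sketch; for k=2 the final k-means step is replaced here by a simple sign cut, which is a simplification of the full method:

```python
import numpy as np

def spectral_embed(A, k):
    """Ng-Jordan-Weiss style spectral embedding of an affinity
    matrix A: rows of the top-k eigenvectors of the normalised
    affinity D^{-1/2} A D^{-1/2}, row-normalised to unit length."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = D_inv_sqrt @ A @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    X = vecs[:, -k:].copy()          # top-k eigenvectors as columns
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    return X

def two_way_cut(A):
    """For k=2, cluster points by the sign of the second spectral
    coordinate (a stand-in for the k-means step of the full method)."""
    X = spectral_embed(A, 2)
    return (X[:, 0] > 0).astype(int)
```

With a block-structured affinity matrix the second eigenvector takes opposite signs on the two blocks, which is exactly the structure the k-means step exploits in the full algorithm.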
lim-liew-2022-english
|
https://aclanthology.org/2022.acl-srw.16
|
English-Malay Cross-Lingual Embedding Alignment using Bilingual Lexicon Augmentation
|
As high-quality Malay language resources are still a scarcity, cross-lingual word embeddings make it possible for richer English resources to be leveraged for downstream Malay text classification tasks. This paper focuses on creating English-Malay cross-lingual word embeddings using embedding alignment by exploiting existing language resources. We augmented the training bilingual lexicons using machine translation with the goal to improve the alignment precision of our cross-lingual word embeddings. We investigated the quality of the current state-of-the-art English-Malay bilingual lexicon and worked on improving its quality using Google Translate. We also examined the effect of Malay word coverage on the quality of cross-lingual word embeddings. Experimental results with a precision of up to 28.17% show that the alignment precision of the cross-lingual word embeddings would inevitably degrade after 1-NN but a better seed lexicon and cleaner nearest neighbours can reduce the number of word pairs required to achieve satisfactory performance. As the English and Malay monolingual embeddings are pre-trained on informal language corpora, our proposed English-Malay embedding alignment approach is also able to map non-standard Malay translations in the English nearest neighbours.
| false |
[] |
[] | null | null | null |
This study was supported by the Ministry of Higher Education Malaysia for Fundamental Research Grant Scheme with Project Code: FRGS/1/2020/ICT02/USM/02/3.
|
2022
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
lockard-etal-2020-zeroshotceres
|
https://aclanthology.org/2020.acl-main.721
|
ZeroShotCeres: Zero-Shot Relation Extraction from Semi-Structured Webpages
|
In many documents, such as semi-structured webpages, textual semantics are augmented with additional information conveyed using visual elements including layout, font size, and color. Prior work on information extraction from semi-structured websites has required learning an extraction model specific to a given template via either manually labeled or distantly supervised data from that template. In this work, we propose a solution for "zero-shot" open-domain relation extraction from webpages with a previously unseen template, including from websites with little overlap with existing sources of knowledge for distant supervision and websites in entirely new subject verticals. Our model uses a graph neural network-based approach to build a rich representation of text fields on a webpage and the relationships between them, enabling generalization to new templates. Experiments show this approach provides a 31% F1 gain over a baseline for zero-shot extraction in a new subject vertical.
| false |
[] |
[] | null | null | null |
We would like to acknowledge grants from ONR N00014-18-1-2826, DARPA N66001-19-2-403, NSF (IIS1616112, IIS1252835), Allen Distinguished Investigator Award, and Sloan Fellowship.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
och-etal-2001-efficient
|
https://aclanthology.org/W01-1408
|
An Efficient A* Search Algorithm for Statistical Machine Translation
|
In this paper, we describe an efficient A* search algorithm for statistical machine translation. In contrast to beam-search or greedy approaches, it is possible to guarantee the avoidance of search errors with A*. We develop various sophisticated admissible and almost admissible heuristic functions. Especially our newly developed method to perform a multi-pass A* search with an iteratively improved heuristic function allows us to translate even long sentences. We compare the A* search algorithm with a beam-search approach on the Hansards task.
| false |
[] |
[] | null | null | null |
This paper is based on work supported partly by the VERBMOBIL project (contract number 01 IV 701 T4) by the German Federal Ministry of Education, Science, Research and Technology. In addition, this work was supported by the National Science Foundation under Grant No. IIS-9820687 through the 1999 Workshop on Language Engineering, Center for Language and Speech Processing, Johns Hopkins University.
|
2001
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
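The Och et al. abstract above relies on the standard A* guarantee: with an admissible heuristic (one that never overestimates the remaining cost), the first goal popped from the frontier is optimal, so search errors are avoided. A generic A* sketch over an explicit graph; the SMT-specific heuristic functions of the paper are not reproduced here:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Generic A* search. `neighbors(n)` yields (cost, next_node)
    pairs; `h` is an admissible heuristic estimating the remaining
    cost to the goal. Returns (total_cost, path) or None."""
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for cost, nxt in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

With h identically zero this degenerates to Dijkstra's algorithm; a tighter admissible h prunes more of the frontier while preserving optimality.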
nakano-kato-1998-cue
|
https://aclanthology.org/W98-0317
|
Cue Phrase Selection in Instruction Dialogue Using Machine Learning
|
The purpose of this paper is to identify effective factors for selecting discourse organization cue phrases in instruction dialogue that signal changes in discourse structure such as topic shifts and attentional state changes. By using a machine learning technique, a variety of features concerning discourse structure, task structure, and dialogue context are examined in terms of their effectiveness, and the best set of learning features is identified. Our result reveals that, in addition to discourse structure, already identified in previous studies, task structure and dialogue context play an important role. Moreover, an evaluation using a large dialogue corpus shows the utility of applying machine learning techniques to cue phrase selection.
| false |
[] |
[] | null | null | null | null |
1998
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
soldner-etal-2019-box
|
https://aclanthology.org/N19-1175
|
Box of Lies: Multimodal Deception Detection in Dialogues
|
Deception often takes place during everyday conversations, yet conversational dialogues remain largely unexplored by current work on automatic deception detection. In this paper, we address the task of detecting multimodal deceptive cues during conversational dialogues. We introduce a multimodal dataset containing deceptive conversations between participants playing The Tonight Show Starring Jimmy Fallon "Box of Lies" game, in which they try to guess whether an object description provided by their opponent is deceptive or not. We conduct annotations of multimodal communication behaviors, including facial and linguistic behaviors, and derive several learning features based on these annotations. Initial classification experiments show promising results, performing well above both a random and a human baseline, and reaching up to 69% accuracy in distinguishing deceptive and truthful behaviors.
| true |
[] |
[] |
Peace, Justice and Strong Institutions
| null | null |
This material is based in part upon work supported by the Michigan Institute for Data Science, by the National Science Foundation (grant #1815291), and by the John Templeton Foundation (grant #61156). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the Michigan Institute for Data Science, the National Science Foundation, or the John Templeton Foundation.
|
2019
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false |
spangher-etal-2020-enabling
|
https://aclanthology.org/2020.nlpcovid19-acl.4
|
Enabling Low-Resource Transfer Learning across COVID-19 Corpora by Combining Event-Extraction and Co-Training
|
Social-science investigations can benefit from a direct comparison of heterogeneous corpora: in this work, we compare U.S. state-level COVID-19 policy announcements with policy discussions on Twitter. To perform this task, we require classifiers with high transfer accuracy to both (1) classify policy announcements and (2) classify tweets. We find that co-training using event-extraction views significantly improves the transfer accuracy of our RoBERTa classifier by 3% above a RoBERTa baseline and 11% above other baselines. The same improvements are not observed for baseline views. With a set of 576 COVID-19 policy announcements, hand-labeled into 1 of 6 categories, our classifier observes a maximum transfer accuracy of .77 f1-score on a hand-validated set of tweets. This work represents the first known application of these techniques to an NLP transfer learning task and facilitates cross-corpora comparisons necessary for studies of social science phenomena.
| true |
[] |
[] |
Good Health and Well-Being
| null | null | null |
2020
| false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
csomai-mihalcea-2008-linguistically
|
https://aclanthology.org/P08-1106
|
Linguistically Motivated Features for Enhanced Back-of-the-Book Indexing
|
In this paper we present a supervised method for back-of-the-book index construction. We introduce a novel set of features that goes beyond the typical frequency-based analysis, including features based on discourse comprehension, syntactic patterns, and information drawn from an online encyclopedia. In experiments carried out on a book collection, the method was found to lead to an improvement of roughly 140% as compared to an existing state-of-the-art supervised method.
| false |
[] |
[] | null | null | null |
We are grateful to Kirk Hastings from the California Digital Library for his help in obtaining the UC Press corpus. This research has been partially supported by a grant from Google Inc. and a grant from the Texas Advanced Research Program (#003594).
|
2008
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
frumkina-etal-1973-computational
|
https://aclanthology.org/C73-1013
|
Computational Methods in the Analysis of Verbal Behaviour
|
The paper attempts to contribute to the development of the mathematical models of verbal behaviour by demonstrating the use of multidimensional individual scaling methods for differential representation of verbal perceptual structures. The method is illustrated with data on computational analysis of the perception of letters of the Russian alphabet. The main concern of computational linguistics (at least, of its theoretically oriented branch) has been the automatic analysis or generation of written texts. "Computational linguistic analysis" has thus become a tool for the validation of linguistic methods and theories. Another branch of computational linguistics, a more empirical and more practically-oriented one, has been largely restricted to data processing and information-retrieval problems. It is safe to say that the full extent of the potential influence of the computational approach upon the study of language functioning, that is of speech perception and verbal behaviour, has not been generally recognized. Unlike psychologists, linguists have been rather slow to adopt computers for the development and evaluation of mathematical models of verbal behaviour. The present paper attempts to contribute to the development of such models by demonstrating the use of some computational approaches for differential representation of verbal perceptual structures. Speech perception may in a broad sense be defined as that part of the communication process taking place within the receiver. We should try to reach a more detailed view of the speech perception mechanism, that is to suggest some "white box" instead of the "black box". The common methodological premise for a model which accounts for various aspects of the speech perception problem is a representation of any speech unit (a sound, a syllable, a word etc.) as a point in a multidimensional perceptual space with the perceived difference between
| false |
[] |
[] | null | null | null | null |
1973
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
agarwal-1997-towards
|
https://aclanthology.org/W97-0617
|
Towards a PURE Spoken Dialogue System for Information Access
|
With the rapid explosion of the World Wide Web, it is becoming increasingly possible to easily acquire a wide variety of information such as flight schedules, yellow pages, used car prices, current stock prices, entertainment event schedules, account balances, etc. It would be very useful to have spoken dialogue interfaces for such information access tasks. We identify portability, usability, robustness, and extensibility as the four primary design objectives for such systems. In other words, the objective is to develop a PURE (Portable, Usable, Robust, Extensible) system. A two-layered dialogue architecture for spoken dialogue systems is presented where the upper layer is domainindependent and the lower layer is domainspecific. We are implementing this architecture in a mixed-initiative system that accesses flight arrival/departure information from the World Wide Web.
| false |
[] |
[] | null | null | null |
The author wishes to thank Jack Godfrey for several useful discussions and his comments on an earlier draft of this paper; Charles Hemphill for his comments and for developing and providing the DAGGER speech recognizer; and the anonymous reviewers for their valuable suggestions that helped improve the final version of this paper.
|
1997
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
pajkossy-zseder-2016-hunvec
|
https://aclanthology.org/L16-1678
|
The hunvec framework for NN-CRF-based sequential tagging
|
In this work we present the open source hunvec framework for sequential tagging, built upon Theano and Pylearn2. The underlying statistical model, which connects linear CRF-s with neural networks, was used by Collobert and co-workers, and several other researchers. For demonstrating the flexibility of our tool, we describe a set of experiments on part-of-speech and named-entity recognition tasks, using English and Hungarian datasets, where we modify both model and training parameters, and illustrate the usage of custom features. Model parameters we experiment with affect the vectorial word representations used by the model; we apply different word vector initializations, defined by Word2vec and GloVe embeddings, and enrich the representation of words by vectors assigned to trigram features. We extend training methods by using their regularized (l2 and dropout) versions. When testing our framework on a Hungarian named entity corpus, we find that its performance reaches the best published results on this dataset, with no need for language-specific feature engineering.
| false |
[] |
[] | null | null | null | null |
2016
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
liao-etal-2017-ynu
|
https://aclanthology.org/I17-4011
|
YNU-HPCC at IJCNLP-2017 Task 1: Chinese Grammatical Error Diagnosis Using a Bi-directional LSTM-CRF Model
|
Building a system to detect Chinese grammatical errors is a challenge for natural language processing researchers. As the number of Chinese learners is increasing, developing such a system can help them study Chinese more easily. This paper introduces a bidirectional long short-term memory (BiLSTM)-conditional random field (CRF) model to produce the sequences that indicate an error type for every position of a sentence, since we regard Chinese grammatical error diagnosis (CGED) as a sequence-labeling problem. Among this year's participants in the CGED shared task, our model ranked third in the detection-level and identification-level results. At the position level, our results ranked second among the participants.
| false |
[] |
[] | null | null | null | null |
2017
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
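The error-diagnosis system above labels every position of a sentence with a CRF layer on top of a BiLSTM, and the decoding step in such models is standard Viterbi. A NumPy sketch of linear-chain Viterbi decoding; in a real system the emission and transition scores come from the trained network, so the values in the test below are illustrative placeholders:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Viterbi decoding for a linear-chain CRF.
    emissions: (T, K) per-position tag scores (e.g. from a BiLSTM).
    transitions: (K, K) score of moving from tag i to tag j.
    Returns the highest-scoring tag sequence as a list of indices."""
    T, K = emissions.shape
    score = emissions[0].copy()          # best score ending in each tag
    back = np.zeros((T, K), dtype=int)   # backpointers
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    tags = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        tags.append(int(back[t][tags[-1]]))
    return tags[::-1]
```

The backward pass over the stored argmax table recovers the globally best sequence rather than the per-position greedy choice, which is what the CRF layer adds over a plain softmax tagger.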