Dataset schema:

- ID: string (11–54 characters)
- url: string (33–64 characters)
- title: string (11–184 characters)
- abstract: string (17–3.87k characters), nullable
- label_nlp4sg: bool (2 classes)
- task: list
- method: list
- goal1: string (9 classes)
- goal2: string (9 classes)
- goal3: string (1 class)
- acknowledgments: string (28–1.28k characters), nullable
- year: string (4 characters)
- sdg1–sdg17: bool, one flag per UN Sustainable Development Goal (in this split, sdg1, sdg2, sdg6, sdg7, sdg12, sdg14 and sdg15 each contain only a single value)
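A minimal sketch of how rows with this schema might be consumed, assuming the table below has been exported as JSON Lines with one object per row keyed by the column names above; the file name `nlp4sg_papers.jsonl` is hypothetical:

```python
# Sketch: filter papers labeled as NLP-for-social-good and list their SDG flags.
# Assumes a hypothetical JSONL export of the table below, one object per row.
import json

ROWS_PATH = "nlp4sg_papers.jsonl"  # hypothetical path, not part of the dataset

with open(ROWS_PATH, encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]

sdg_columns = [f"sdg{i}" for i in range(1, 18)]
for row in rows:
    if row["label_nlp4sg"]:
        # Collect the SDG columns that are set for this positively labeled paper.
        active_sdgs = [col for col in sdg_columns if row[col]]
        print(row["ID"], row["year"], row.get("goal1"), active_sdgs)
```

On the rows shown below, only three records carry label_nlp4sg = true (holderness-etal-2018-analysis, von-etter-etal-2010-assessment, vydiswaran-etal-2019-towards), each with goal1 "Good Health and Well-Being" and sdg3 set.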
ID | url | title | abstract | label_nlp4sg | task | method | goal1 | goal2 | goal3 | acknowledgments | year | sdg1 | sdg2 | sdg3 | sdg4 | sdg5 | sdg6 | sdg7 | sdg8 | sdg9 | sdg10 | sdg11 | sdg12 | sdg13 | sdg14 | sdg15 | sdg16 | sdg17 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ekbal-etal-2008-named | https://aclanthology.org/I08-2077 | Named Entity Recognition in Bengali: A Conditional Random Field Approach | This paper reports on the development of a Named Entity Recognition (NER) system for Bengali using statistical Conditional Random Fields (CRFs). The system makes use of the different contextual information of the words along with the variety of features that are helpful in predicting the various named entity (NE) classes. A portion of the partially NE-tagged Bengali news corpus, developed from the archive of a leading Bengali newspaper available on the web, has been used to develop the system. The training set consists of 150K words and has been manually annotated with an NE tagset of seventeen tags. Experimental results of the 10-fold cross-validation test show the effectiveness of the proposed CRF-based NER system with overall average Recall, Precision and F-Score values of 93.8%, 87.8% and 90.7%, respectively. | false | [] | [] | null | null | null | null | 2008 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
goodman-1997-global | https://aclanthology.org/W97-0302 | Global Thresholding and Multiple-Pass Parsing | We present a variation on classic beam thresholding techniques that is up to an order of magnitude faster than the traditional method, at the same performance level. We also present a new thresholding technique, global thresholding, which, combined with the new beam thresholding, gives an additional factor of two improvement, and a novel technique, multiple-pass parsing, that can be combined with the others to yield yet another 50% improvement. We use a new search algorithm to simultaneously optimize the thresholding parameters of the various algorithms. | false | [] | [] | null | null | null | null | 1997 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
bird-2020-sparse | https://aclanthology.org/2020.cl-4.1 | Sparse Transcription | The transcription bottleneck is often cited as a major obstacle for efforts to document the world's endangered languages and supply them with language technologies. One solution is to extend methods from automatic speech recognition and machine translation, and recruit linguists to provide narrow phonetic transcriptions and sentence-aligned translations. However, I believe that these approaches are not a good fit with the available data and skills, or with long-established practices that are essentially word-based. In seeking a more effective approach, I consider a century of transcription practice and a wide range of computational approaches, before proposing a computational model based on spoken term detection that I call "sparse transcription." This represents a shift away from current assumptions that we transcribe phones, transcribe fully, and transcribe first. Instead, sparse transcription combines the older practice of word-level transcription with interpretive, iterative, and interactive processes that are amenable to wider participation and that open the way to new methods for processing oral languages. | false | [] | [] | null | null | null | I am indebted to the Bininj people of the Kuwarddewardde "Stone Country" in Northern Australia for the opportunity to live and work in their community, where I gained many insights in the course of learning to transcribe Kunwinjku. Thanks to Steve Abney, Laurent Besacier, Mark Liberman, Maïa Ponsonnet, to my colleagues and students in the Top End Language Lab at Charles Darwin University, and to several anonymous reviewers for thoughtful feedback. This research has been supported by a grant from the Australian Research | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
schaler-2004-certified | https://aclanthology.org/2004.tc-1.15 | The Certified Localisation Professional (CLP) | The Institute of Localisation Professionals (TILP) was established in 2002 as a non-profit organisation and in 2003 merged with the US-based Professional Association for Localization (PAL). TILP's objective is to develop professional practices in localisation globally. TILP is owned by its individual members. It coordinates a number of regional chapters in Europe, North America, Latin America and Asia. The Certified Localisation Professional Programme (CLP) was launched by TILP in September 2004 and provides professional certification to individuals working in a variety of professions in localisation, among them project managers, engineers, testers, internationalisation specialists, and linguists. This article will outline the CLP programme and is aimed at course providers interested in offering TILP accredited courses, employers planning to make CLP certification a requirement for future employees, and individual professionals planning to develop their professional career. | false | [] | [] | null | null | null | The support received by the European Union's ADAPT Programme for the development of the initial CLP project (A-1997-Irl-551) is acknowledged. This project was coordinated by the LRC. The project partners were: FÁS (Irish National Training Agency), CATT (Siemens Nixdorf Training Centre) and TELSI Ireland, supported by a large number of stakeholders. The author also would like to acknowledge the support of Siobhan King-Hughes in the preparation of the first CLP certification outline, partially reproduced in this article. | 2004 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
lalor-etal-2019-learning | https://aclanthology.org/D19-1434 | Learning Latent Parameters without Human Response Patterns: Item Response Theory with Artificial Crowds | Incorporating Item Response Theory (IRT) into NLP tasks can provide valuable information about model performance and behavior. Traditionally, IRT models are learned using human response pattern (RP) data, presenting a significant bottleneck for large data sets like those required for training deep neural networks (DNNs). In this work we propose learning IRT models using RPs generated from artificial crowds of DNN models. We demonstrate the effectiveness of learning IRT models using DNN-generated data through quantitative and qualitative analyses for two NLP tasks. Parameters learned from human and machine RPs for natural language inference and sentiment analysis exhibit medium to large positive correlations. We demonstrate a use-case for latent difficulty item parameters, namely training set filtering, and show that using difficulty to sample training data outperforms baseline methods. Finally, we highlight cases where human expectation about item difficulty does not match difficulty as estimated from the machine RPs. | false | [] | [] | null | null | null | We thank the anonymous reviewers for their comments and suggestions. This work was supported in part by the HSR&D award IIR 1I01HX001457 from the United States Department of Veterans Affairs (VA). We also acknowledge the support of LM012817 from the National Institutes of Health. This work was also supported in part by the Center for Intelligent Information Retrieval. The contents of this paper do not represent the views of CIIR, NIH, VA, or the United States Government. | 2019 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
dunning-1993-accurate | https://aclanthology.org/J93-1003 | Accurate Methods for the Statistics of Surprise and Coincidence | Much work has been done on the statistical analysis of text. In some cases reported in the literature, inappropriate statistical methods have been used, and the statistical significance of results has not been addressed. In particular, asymptotic normality assumptions have often been used unjustifiably, leading to flawed results. This assumption of normal distribution limits the ability to analyze rare events. Unfortunately, rare events do make up a large fraction of real text. However, more applicable methods based on likelihood ratio tests are available that yield good results with relatively small samples. These tests can be implemented efficiently, and have been used for the detection of composite terms and for the determination of domain-specific terms. In some cases, these measures perform much better than the methods previously used. In cases where traditional contingency table methods work well, the likelihood ratio tests described here are nearly identical. This paper describes the basis of a measure based on likelihood ratios that can be applied to the analysis of text. | false | [] | [] | null | null | null | null | 1993 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
li-jurafsky-2015-multi | https://aclanthology.org/D15-1200 | Do Multi-Sense Embeddings Improve Natural Language Understanding? | Learning a distinct representation for each sense of an ambiguous word could lead to more powerful and fine-grained models of vector-space representations. Yet while 'multi-sense' methods have been proposed and tested on artificial word-similarity tasks, we don't know if they improve real natural language understanding tasks. In this paper we introduce a multi-sense embedding model based on Chinese Restaurant Processes that achieves state of the art performance on matching human word similarity judgments, and propose a pipelined architecture for incorporating multi-sense embeddings into language understanding. We then test the performance of our model on part-of-speech tagging, named entity recognition, sentiment analysis, semantic relation identification and semantic relatedness, controlling for embedding dimensionality. We find that multi-sense embeddings do improve performance on some tasks (part-of-speech tagging, semantic relation identification, semantic relatedness) but not on others (named entity recognition, various forms of sentiment analysis). We discuss how these differences may be caused by the different role of word sense information in each of the tasks. The results highlight the importance of testing embedding models in real applications. | false | [] | [] | null | null | null | We would like to thank Sam Bowman, Ignacio Cases, Kevin Gu, Gabor Angeli, Sida Wang, Percy Liang and other members of the Stanford NLP group, as well as anonymous reviewers for their helpful advice on various aspects of this work. We gratefully acknowledge the support of the NSF via award IIS-1514268, the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF, DARPA, AFRL, or the US government. | 2015 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
walinska-potoniec-2020-urszula | https://aclanthology.org/2020.semeval-1.161 | Urszula Walińska at SemEval-2020 Task 8: Fusion of Text and Image Features Using LSTM and VGG16 for Memotion Analysis | In the paper, we describe Urszula Walińska's entry to the SemEval-2020 Task 8: Memotion Analysis. The sentiment analysis of memes task is motivated by a pervasive problem of offensive content spread in social media up to the present time. In fact, memes are an important medium of expressing opinion and emotions, therefore they can be hateful at many times. In order to identify emotions expressed by memes we construct a tool based on neural networks and deep learning methods. It takes advantage of the multi-modal nature of the task and performs fusion of image and text features extracted by models dedicated to this task. Our solution achieved 0.346 macro F1-score in Task A-Sentiment Classification, which brought us to the 7th place in the official rank of the competition. | false | [] | [] | null | null | null | Urszula Walińska executed the research as part of a master thesis project under the supervision of Jedrzej Potoniec. This work was partially funded by project 0311/SBAD/0678. | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
rudinger-etal-2018-neural | https://aclanthology.org/D18-1114 | Neural-Davidsonian Semantic Proto-role Labeling | We present a model for semantic proto-role labeling (SPRL) using an adapted bidirectional LSTM encoding strategy that we call Neural-Davidsonian: predicate-argument structure is represented as pairs of hidden states corresponding to predicate and argument head tokens of the input sequence. We demonstrate: (1) state-of-the-art results in SPRL, and (2) that our network naturally shares parameters between attributes, allowing for learning new attribute types with limited added supervision. | false | [] | [] | null | null | null | This research was supported by the JHU HLT-COE, DARPA AIDA, and NSF GRFP (Grant No. DGE-1232825). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA, NSF, or the U.S. Government. | 2018 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
sheikh-etal-2016-diachronic | https://aclanthology.org/L16-1609 | How Diachronic Text Corpora Affect Context based Retrieval of OOV Proper Names for Audio News | Out-Of-Vocabulary (OOV) words missed by Large Vocabulary Continuous Speech Recognition (LVCSR) systems can be recovered with the help of topic and semantic context of the OOV words captured from a diachronic text corpus. In this paper we investigate how the choice of documents for the diachronic text corpora affects the retrieval of OOV Proper Names (PNs) relevant to an audio document. We first present our diachronic French broadcast news datasets, which highlight the motivation of our study on OOV PNs. Then the effect of using diachronic text data from different sources and a different time span is analysed. With OOV PN retrieval experiments on French broadcast news videos, we conclude that a diachronic corpus with text from different sources leads to better retrieval performance than one relying on text from single source or from a longer time span. | false | [] | [] | null | null | null | This work is funded by the ContNomina project supported by the French National Research Agency (ANR) under the contract ANR-12-BS02-0009. | 2016 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
tripodi-etal-2019-tracing | https://aclanthology.org/W19-4715 | Tracing Antisemitic Language Through Diachronic Embedding Projections: France 1789-1914 | We investigate some aspects of the history of antisemitism in France, one of the cradles of modern antisemitism, using diachronic word embeddings. We constructed a large corpus of French books and periodicals issues that contain a keyword related to Jews and performed a diachronic word embedding over the 1789-1914 period. We studied the changes over time in the semantic spaces of 4 target words and performed embedding projections over 6 streams of antisemitic discourse. This allowed us to track the evolution of antisemitic bias in the religious, economic, socio-politic, racial, ethic and conspiratorial domains. Projections show a trend of growing antisemitism, especially in the years starting in the mid-80s and culminating in the Dreyfus affair. Our analysis also allows us to highlight the peculiar adverse bias towards Judaism in the broader context of other religions. | false | [] | [] | null | null | null | The authors of this work have received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 732942. The experiments have been run on the SCSCF cluster of Ca' Foscari University. | 2019 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
popat-etal-2013-haves | https://aclanthology.org/P13-1041 | The Haves and the Have-Nots: Leveraging Unlabelled Corpora for Sentiment Analysis | Expensive feature engineering based on WordNet senses has been shown to be useful for document level sentiment classification. A plausible reason for such a performance improvement is the reduction in data sparsity. However, such a reduction could be achieved with a lesser effort through the means of syntagma based word clustering. In this paper, the problem of data sparsity in sentiment analysis, both monolingual and cross-lingual, is addressed through the means of clustering. Experiments show that cluster based data sparsity reduction leads to performance better than sense based classification for sentiment analysis at document level. A similar idea is applied to Cross Lingual Sentiment Analysis (CLSA), and it is shown that reduction in data sparsity (after translation or bilingual-mapping) produces accuracy higher than Machine Translation based CLSA and sense based CLSA. | false | [] | [] | null | null | null | null | 2013 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
aji-etal-2020-neural | https://aclanthology.org/2020.acl-main.688 | In Neural Machine Translation, What Does Transfer Learning Transfer? | Transfer learning improves quality for low-resource machine translation, but it is unclear what exactly it transfers. We perform several ablation studies that limit information transfer, then measure the quality impact across three language pairs to gain a black-box understanding of transfer learning. Word embeddings play an important role in transfer learning, particularly if they are properly aligned. Although transfer learning can be performed without embeddings, results are sub-optimal. In contrast, transferring only the embeddings but nothing else yields catastrophic results. We then investigate diagonal alignments with auto-encoders over real languages and randomly generated sequences, finding even randomly generated sequences as parents yield noticeable but smaller gains. Finally, transfer learning can eliminate the need for a warmup phase when training transformer models in high resource language pairs. | false | [] | [] | null | null | null | This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (http://www.csd3.cam.ac.uk/), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). Alham Fikri Aji is funded by the Indonesia Endowment Fund for Education scholarship scheme. Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727). | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
abed-reiter-2020-arabic | https://aclanthology.org/2020.inlg-1.2 | Arabic NLG Language Functions | The Arabic language has very limited support from NLG researchers. In this paper, we explain the challenges of the core grammar, provide a lexical resource, and implement the first language functions for the Arabic language. We did a human evaluation to evaluate our functions in generating sentences from the NADA Corpus. | false | [] | [] | null | null | null | null | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
musha-1986-new | https://aclanthology.org/C86-1111 | A New Predictive Analyzer of English | Aspects of syntactic predictions made during the recognition of English sentences are investigated. We reinforce Kuno's original predictive analyzer[i] by introducing five types of predictions. For each type of prediction, we discuss and present its necessity, its description method, and recognition mechanism. We make use of three kinds of stacks whose behavior is specified by grammar rules in an extended version of Greibach normal form. We also investigate other factors that affect the predictive recognition process, i.e., preferences among syntactic ambiguities and necessary amount of lookahead. These factors as well as the proposed handling mechanisms of predictions are tested by analyzing two kinds of articles. In our experiment, more than seventy percent of sentences are recognized and looking two words ahead seems to be the critical length for the predictive recognition. | false | [] | [] | null | null | null | I would especially like to thank my adviser, Prof. A. Yonezawa of Tokyo Institute of Technology, for his valuable comments on this research and encouragement. I also thank the members of Yonezawa Lab. for their comments on my research. I also give my special thanks to the managers of Resource Sharing Company who allowed me to use their valuable dictionary for my research. | 1986 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
holderness-etal-2018-analysis | https://aclanthology.org/W18-5615 | Analysis of Risk Factor Domains in Psychosis Patient Health Records | Readmission after discharge from a hospital is disruptive and costly, regardless of the reason. However, it can be particularly problematic for psychiatric patients, so predicting which patients may be readmitted is critically important but also very difficult. Clinical narratives in psychiatric electronic health records (EHRs) span a wide range of topics and vocabulary; therefore, a psychiatric readmission prediction model must begin with a robust and interpretable topic extraction component. We created a data pipeline for using document vector similarity metrics to perform topic extraction on psychiatric EHR data in service of our long-term goal of creating a readmission risk classifier. We show initial results for our topic extraction model and identify additional features we will be incorporating in the future. | true | [] | [] | Good Health and Well-Being | null | null | This work was supported by a grant from the National Institute of Mental Health (grant no. 5R01MH109687 to Mei-Hua Hall). We would also like to thank the LOUHI 2018 Workshop reviewers for their constructive and helpful comments. | 2018 | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
mieskes-2017-quantitative | https://aclanthology.org/W17-1603 | A Quantitative Study of Data in the NLP community | We present results on a quantitative analysis of publications in the NLP domain on collecting, publishing and availability of research data. We find that a wide range of publications rely on data crawled from the web, but few give details on how potentially sensitive data was treated. Additionally, we find that while links to repositories of data are given, they often do not work even a short time after publication. We put together several suggestions on how to improve this situation based on publications from the NLP domain, but also other research areas. | false | [] | [] | null | null | null | This work was partially supported by the DFG-funded research training group "Adaptive Preparation of Information from Heterogeneous Sources" (AIPHES, GRK 1994/1). We would like to thank the reviewers for their valuable comments that helped to considerably improve the paper. | 2017 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
oepen-etal-2016-opt | https://aclanthology.org/K16-2002 | OPT: Oslo–Potsdam–Teesside. Pipelining Rules, Rankers, and Classifier Ensembles for Shallow Discourse Parsing | The OPT submission to the Shared Task of the 2016 Conference on Natural Language Learning (CoNLL) implements a 'classic' pipeline architecture, combining binary classification of (candidate) explicit connectives, heuristic rules for non-explicit discourse relations, ranking and 'editing' of syntactic constituents for argument identification, and an ensemble of classifiers to assign discourse senses. With an end-to-end performance of 27.77 F1 on the English 'blind' test data, our system advances the previous state of the art (Wang & Lan, 2015) by close to four F1 points, with particularly good results for the argument identification sub-tasks. | false | [] | [] | null | null | null | We are indebted to Te Rutherford of Brandeis University for his effort in preparing data and infrastructure for the Task, as well as for shepherding our team and everyone else through its various stages. We are grateful to two anonymous reviewers for comments on an earlier version of this manuscript. | 2016 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
sikos-pado-2018-using | https://aclanthology.org/W18-3813 | Using Embeddings to Compare FrameNet Frames Across Languages | Much of the recent interest in Frame Semantics is fueled by the substantial extent of its applicability across languages. At the same time, lexicographic studies have found that the applicability of individual frames can be diminished by cross-lingual divergences regarding polysemy, syntactic valency, and lexicalization. Due to the large effort involved in manual investigations, there are so far no broad-coverage resources with "problematic" frames for any language pair. Our study investigates to what extent multilingual vector representations of frames learned from manually annotated corpora can address this need by serving as a wide coverage source for such divergences. We present a case study for the language pair English-German using the FrameNet and SALSA corpora and find that inferences can be made about cross-lingual frame applicability using a vector space model. | false | [] | [] | null | null | null | null | 2018 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
lee-etal-2005-zero | https://aclanthology.org/I05-1052 | Why Is Zero Marking Important in Korean? | This paper argues for the necessity of zero pronoun annotations in Korean treebanks and provides an annotation scheme that can be used to develop a gold standard for testing different anaphor resolution algorithms. Relevant issues of pronoun annotation will be discussed by comparing the Penn Korean Treebank with zero pronoun markup and the newly developing Sejong Treebank without zero pronoun markup. In addition to supportive evidence for zero marking, necessary morphosyntactic and semantic features will be suggested for zero annotation in Korean treebanks. | false | [] | [] | null | null | null | null | 2005 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
johnson-watanabe-1990-relational | https://aclanthology.org/W90-0123 | Relational-Grammar-Based Generation in the JETS Japanese-English Machine Translation System | This paper describes the design and functioning of the English generation phase in JETS, a limited transfer, Japanese-English machine translation system that is loosely based on the linguistic framework of relational grammar. To facilitate the development of relational-grammar-based generators, we have built an NL-and-application-independent generator shell and relational grammar rule-writing language. The implemented generator, GENIE, maps abstract canonical structures, representing the basic predicate-argument structures of sentences, into well-formed English sentences via a two-stage plan-and-execute design. This modularity permits the independent development of a very general, deterministic execution grammar that is driven by a set of planning rules sensitive to lexical, syntactic and stylistic constraints. Processing in GENIE is category-driven, i.e., grammatical rules are distributed over a part-of-speech hierarchy and, using an inheritance mechanism, are invoked only if appropriate for the category being processed. | false | [] | [] | null | null | null | null | 1990 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
wise-2014-keynote | https://aclanthology.org/W14-4101 | Keynote: Data Archeology: A theory informed approach to analyzing data traces of social interaction in large scale learning environments | Data archeology is a theoretically-informed approach to make sense of the digital artifacts left behind by a prior learning "civilization." Critical elements include use of theoretical learning models to construct analytic metrics, attention to temporality as a means to reconstruct individual and collective trajectories, and consideration of the pedagogical and technological structures framing activity. Examples of the approach using discussion forum trace data will be presented. | false | [] | [] | null | null | null | null | 2014 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
brants-xu-2009-distributed | https://aclanthology.org/N09-4002 | Distributed Language Models | Language models are used in a wide variety of natural language applications, including machine translation, speech recognition, spelling correction, optical character recognition, etc. Recent studies have shown that more data is better data, and bigger language models are better language models: the authors found nearly constant machine translation improvements with each doubling of the training data size even at 2 trillion tokens (resulting in 400 billion n-grams). Training and using such large models is a challenge. This tutorial shows efficient methods for distributed training of large language models based on the MapReduce computing model. We also show efficient ways of using distributed models in which requesting individual n-grams is expensive because they require communication between different machines. | false | [] | [] | null | null | null | null | 2009 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
larson-etal-2019-outlier | https://aclanthology.org/N19-1051 | Outlier Detection for Improved Data Quality and Diversity in Dialog Systems | In a corpus of data, outliers are either errors: mistakes in the data that are counterproductive, or are unique: informative samples that improve model robustness. Identifying outliers can lead to better datasets by (1) removing noise in datasets and (2) guiding collection of additional data to fill gaps. However, the problem of detecting both outlier types has received relatively little attention in NLP, particularly for dialog systems. We introduce a simple and effective technique for detecting both erroneous and unique samples in a corpus of short texts using neural sentence embeddings combined with distance-based outlier detection. We also present a novel data collection pipeline built atop our detection technique to automatically and iteratively mine unique data samples while discarding erroneous samples. Experiments show that our outlier detection technique is effective at finding errors while our data collection pipeline yields highly diverse corpora that in turn produce more robust intent classification and slot-filling models. | false | [] | [] | null | null | null | The authors thank Yiping Kang, Yunqi Zhang, Joseph Peper, and the anonymous reviewers for their helpful comments and feedback. | 2019 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
zhai-etal-2021-script | https://aclanthology.org/2021.starsem-1.18 | Script Parsing with Hierarchical Sequence Modelling | Scripts (Schank and Abelson, 1977) capture commonsense knowledge about everyday activities and their participants. Script knowledge has been shown to be useful in a number of NLP tasks, such as referent prediction, discourse classification, and story generation. A crucial step for the exploitation of script knowledge is script parsing, the task of tagging a text with the events and participants from a certain activity. This task is challenging: it requires information both about the ways events and participants are usually realized in surface language as well as the order in which they occur in the world. We show how to do accurate script parsing with a hierarchical sequence model. Our model improves the state of the art of event parsing by over 16 points F-score and, for the first time, accurately tags script participants. | false | [] | [] | null | null | null | We thank Simon Ostermann for providing the data from his experiments and his help all along the way of re-implementing his model. We thank Vera Demberg for the useful comments and suggestions. We also thank the anonymous reviewers for the valuable comments. This research was funded by the German Research Foundation (DFG) as part of SFB 1102 (Project-ID 232722074) "Information Density and Linguistic Encoding". | 2021 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ruiter-etal-2019-self | https://aclanthology.org/P19-1178 | Self-Supervised Neural Machine Translation | We present a simple new method where an emergent NMT system is used for simultaneously selecting training data and learning internal NMT representations. This is done in a self-supervised way without parallel data, in such a way that both tasks enhance each other during training. The method is language independent, introduces no additional hyper-parameters, and achieves BLEU scores of 29.21 (en2fr) and 27.36 (fr2en) on newstest2014 using English and French Wikipedia data for training. | false | [] | [] | null | null | null | The project on which this paper is based was partially funded by the German Federal Ministry of Education and Research under the funding code 01IW17001 (Deeplee) and by the Leibniz Gemeinschaft via the SAW-2016-ZPID-2 project (CLuBS). Responsibility for the content of this publication is with the authors. | 2019 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
nie-etal-2020-adversarial | https://aclanthology.org/2020.acl-main.441 | Adversarial NLI: A New Benchmark for Natural Language Understanding | We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks, while posing a more difficult challenge with its new test set. Our analysis sheds light on the shortcomings of current state-of-the-art models, and shows that non-expert annotators are successful at finding their weaknesses. The data collection method can be applied in a never-ending learning scenario, becoming a moving target for NLU, rather than a static benchmark that will quickly saturate. | false | [] | [] | null | null | null | YN interned at Facebook. YN and MB were sponsored by DARPA MCS Grant #N66001-19-2-4031, ONR Grant #N00014-18-1-2871, and DARPA YFA17-D17AP00022. Special thanks to Sam Bowman for comments on an earlier draft. | 2020 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
nn-1997-neocortech | https://aclanthology.org/1997.mtsummit-exhibits.11 | NeocorTech LLC | NEOCORTECH is the premier provider of Japanese communication solutions for computer users outside of Japan, using Microsoft Windows 32-bit operating systems. In fact, only NeocorTech's machine translation (MT) products have been awarded the coveted Microsoft Windows 95 compatibility logo. Long heralded as the final solution in international communications, MT is now available between Japanese and English on a Windows 95 or NT computer. With Neocor's Tsunami MT, Typhoon MT, and KanjiScan programs, you don't need any additional support software to recognize, display, edit, translate, print, and e-mail. TSUNAMI MT is NeocorTech's flagship software, and has been providing communication solutions for organizations and individuals that need to translate their English documents efficiently and effectively into Japanese. Tsunami's key features are high translation speed, unmatched accuracy, and the capability to operate on English Windows with an English interface, and an English manual (a huge benefit for American companies that do not use Japanese operating systems). In addition to superior translation speed and accuracy, Tsunami MT comes with Japanese TrueType fonts, allows users to type Kanji & Kana with an English keyboard, and offers flexible Japanese formality and grammar settings. | false | [] | [] | null | null | null | null | 1997 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
grouin-2008-certification | http://www.lrec-conf.org/proceedings/lrec2008/pdf/280_paper.pdf | Certification and Cleaning up of a Text Corpus: Towards an Evaluation of the "Grammatical" Quality of a Corpus | We present in this article the methods we used for obtaining measures to ensure the quality and well-formedness of a text corpus. These measures allow us to determine the compatibility of a corpus with the treatments we want to apply on it. We called this method "certification of corpus". These measures are based upon the characteristics required by the linguistic treatments we have to apply on the corpus we want to certify. Since the certification of corpus allows us to highlight the errors present in a text, we developed modules to carry out an automatic correction. By applying these modules, we reduced the number of errors. In consequence, it increases the quality of the corpus making it possible to use a corpus that a first certification would not have admitted. | false | [] | [] | null | null | null | This work has been done within the framework of the SEVEN project, held by the ANR (project number: ANR-05-RNTL-02204 (S0604149W)). | 2008 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
grangier-auli-2018-quickedit | https://aclanthology.org/N18-1025 | QuickEdit: Editing Text & Translations by Crossing Words Out | We propose a framework for computer-assisted text editing. It applies to translation post-editing and to paraphrasing. Our proposal relies on very simple interactions: a human editor modifies a sentence by marking tokens they would like the system to change. Our model then generates a new sentence which reformulates the initial sentence by avoiding marked words. The approach builds upon neural sequence-to-sequence modeling and introduces a neural network which takes as input a sentence along with change markers. Our model is trained on translation bitext by simulating post-edits. We demonstrate the advantage of our approach for translation post-editing through simulated post-edits. We also evaluate our model for paraphrasing through a user study. | false | [] | [] | null | null | null | We thank Marc'Aurelio Ranzato, Sumit Chopra, Roman Novak for helpful discussions. We thank Sergey Edunov, Sam Gross, Myle Ott for writing the fairseq-py toolkit used in our experiments. We thank Jonathan Mallinson, Rico Sennrich, Mirella Lapata, for sharing ParaNet data. | 2018 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
yakhnenko-rosario-2008-mining | https://aclanthology.org/I08-1036 | Mining the Web for Relations between Digital Devices using a Probabilistic Maximum Margin Model | Searching and reading the Web is one of the principal methods used to seek out information to resolve problems about technology in general and digital devices in particular. This paper addresses the problem of text mining in the digital devices domain. In particular, we address the task of detecting semantic relations between digital devices in the text of Web pages. We use a Naïve Bayes model trained to maximize the margin and compare its performance with several other comparable methods. We construct a novel dataset which consists of segments of text extracted from the Web, where each segment contains pairs of devices. We also propose a novel, inexpensive and very effective way of getting people to label text data using a Web service, the Mechanical Turk. Our results show that the maximum margin model consistently outperforms the other methods. | false | [] | [] | null | null | null | The authors would like to thank the reviewers for their feedback and comments; William Schilit for invaluable insight and help and for first suggesting using the MTurk to gather labeled data; David McDonald for help with developing survey instructions; and numerous MT workers for providing the labels. | 2008 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
von-etter-etal-2010-assessment | https://aclanthology.org/W10-1105 | Assessment of Utility in Web Mining for the Domain of Public Health | This paper presents ongoing work on application of Information Extraction (IE) technology to domain of Public Health, in a real-world scenario. A central issue in IE is the quality of the results. We present two novel points. First, we distinguish the criteria for quality: the objective criteria that measure correctness of the system's analysis in traditional terms (F-measure, recall and precision), and, on the other hand, subjective criteria that measure the utility of the results to the end-user. Second, to obtain measures of utility, we build an environment that allows users to interact with the system by rating the analyzed content. We then build and compare several classifiers that learn from the user's responses to predict the relevance scores for new events. We conduct experiments with learning to predict relevance, and discuss the results and their implications for text mining in the domain of Public Health. | true | [] | [] | Good Health and Well-Being | null | null | This research was supported in part by: the Technology Development Agency of Finland (TEKES), through the ContentFactory Project, and by the Academy of Finland's National Centre of Excellence "Algorithmic Data Analysis (ALGODAN)." | 2010 | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
vydiswaran-etal-2019-towards | https://aclanthology.org/W19-3217 | Towards Text Processing Pipelines to Identify Adverse Drug Events-related Tweets: University of Michigan @ SMM4H 2019 Task 1 | We participated in Task 1 of the Social Media Mining for Health Applications (SMM4H) 2019 Shared Tasks on detecting mentions of adverse drug events (ADEs) in tweets. Our approach relied on a text processing pipeline for tweets, and training traditional machine learning and deep learning models. Our submitted runs performed above average for the task. | true | [] | [] | Good Health and Well-Being | null | null | null | 2019 | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
koehn-2016-computer | https://aclanthology.org/P16-5003 | Computer Aided Translation | null | false | [] | [] | null | null | null | null | 2016 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
matusov-etal-2004-symmetric | https://aclanthology.org/C04-1032 | Symmetric Word Alignments for Statistical Machine Translation | In this paper, we address the word alignment problem for statistical machine translation. We aim at creating a symmetric word alignment allowing for reliable one-to-many and many-to-one word relationships. We perform the iterative alignment training in the source-to-target and the target-to-source direction with the well-known IBM and HMM alignment models. Using these models, we robustly estimate the local costs of aligning a source word and a target word in each sentence pair. Then, we use efficient graph algorithms to determine the symmetric alignment with minimal total costs (i.e., maximal alignment probability). We evaluate the automatic alignments created in this way on the German-English Verbmobil task and the French-English Canadian Hansards task. We show statistically significant improvements of the alignment quality compared to the best results reported so far. On the Verbmobil task, we achieve an improvement of more than 1% absolute over the baseline error rate of 4.7%. | false | [] | [] | null | null | null | This work has been partially funded by the EU project TransType 2, IST-2001-32091. | 2004 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
brixey-etal-2018-chahta | https://aclanthology.org/L18-1532 | Chahta Anumpa: A multimodal corpus of the Choctaw Language | This paper presents a general use corpus for the Native American indigenous language Choctaw. The corpus contains audio, video, and text resources, with many texts also translated in English. The Oklahoma Choctaw and the Mississippi Choctaw variants of the language are represented in the corpus. The data set provides documentation support for the threatened language, and allows researchers and language teachers access to a diverse collection of resources. | false | [] | [] | null | null | null | null | 2018 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
schubert-pelletier-1982-english | https://aclanthology.org/J82-1003 | From English to Logic: Context-Free Computation of 'Conventional' Logical Translation | We describe an approach to parsing and logical translation that was inspired by Gazdar's work on context-free grammar for English. Each grammar rule consists of a syntactic part that specifies an acceptable fragment of a parse tree, and a semantic part that specifies how the logical formulas corresponding to the constituents of the fragment are to be combined to yield the formula for the fragment. However, we have sought to reformulate Gazdar's semantic rules so as to obtain more or less 'conventional' logical translations of English sentences, avoiding the interpretation of NPs as property sets and the use of intensional functors other than certain propositional operators. The reformulated semantic rules often turn out to be slightly simpler than Gazdar's. Moreover, by using a semantically ambiguous logical syntax for the preliminary translations, we can account for quantifier and coordinator scope ambiguities in syntactically unambiguous sentences without recourse to multiple semantic rules, and are able to separate the disambiguation process from the operation of the parser-translator. We have implemented simple recursive descent and left-corner parsers to demonstrate the practicality of our approach. | false | [] | [] | null | null | null | The authors are indebted to Ivan Sag for a series of very stimulating seminars held by him at the University of Alberta on his linguistic research, and valuable follow-up discussions. The helpful comments of the referees and of Lotfi Zadeh are also appreciated. The research was supported in part by NSERC Operating Grants A8818 and A2252; preliminary work on the left-corner parser was carried out by one of the authors (LKS) under an Alexander von Humboldt fellowship in 1978-79. | 1982 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
kahn-etal-2004-parsing | https://aclanthology.org/N04-4032 | Parsing Conversational Speech Using Enhanced Segmentation | The lack of sentence boundaries and presence of disfluencies pose difficulties for parsing conversational speech. This work investigates the effects of automatically detecting these phenomena on a probabilistic parser's performance. We demonstrate that a state-of-the-art segmenter, relative to a pause-based segmenter, gives more than 45% of the possible error reduction in parser performance, and that presentation of interruption points to the parser improves performance over using sentence boundaries alone. | false | [] | [] | null | null | null | We thank J. Kim for providing the SU-IP detection results, using tools developed under DARPA grant MDA904-02-C-0437. This work is supported by NSF grant no. IIS085940. Any opinions or conclusions expressed in this paper are those of the authors and do not necessarily reflect the views of these agencies. | 2004 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
feng-etal-2004-new | https://aclanthology.org/W04-3248 | A New Approach for English-Chinese Named Entity Alignment | Traditional word alignment approaches cannot come up with satisfactory results for Named Entities. In this paper, we propose a novel approach using a maximum entropy model for named entity alignment. To ease the training of the maximum entropy model, bootstrapping is used to help supervised learning. Unlike previous work reported in the literature, our work conducts bilingual Named Entity alignment without word segmentation for Chinese and its performance is much better than that with word segmentation. When compared with IBM and HMM alignment models, experimental results show that our approach outperforms IBM Model 4 and HMM significantly. | false | [] | [] | null | null | null | Thanks to Hang Li, Changning Huang, Yunbo Cao, and John Chen for their valuable comments on this work. Also thank Kevin Knight for his checking of the English of this paper. Special thanks go to Eduard Hovy for his continuous support and encouragement while the first author was visiting MSRA. | 2004 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
liu-hulden-2022-detecting | https://aclanthology.org/2022.acl-short.19 | Detecting Annotation Errors in Morphological Data with the Transformer | Annotation errors that stem from various sources are usually unavoidable when performing large-scale annotation of linguistic data. In this paper, we evaluate the feasibility of using the Transformer model to detect various types of annotator errors in type-based morphological datasets that contain inflected word forms. We evaluate our error detection model on four languages by injecting three different types of artificial errors into the data: (1) typographic errors, where single characters in the data are inserted, replaced, or deleted; (2) linguistic confusion errors where two inflected forms are systematically swapped; and (3) self-adversarial errors where the Transformer model itself is used to generate plausible-looking, but erroneous forms by retrieving high-scoring predictions from a Transformer search beam. Results show that the model can with perfect, or near-perfect recall detect errors in all three scenarios, even when significant amounts of the annotated data (5%-30%) are corrupted on all languages tested. Precision varies across the languages and types of errors, but is high enough that the model can reliably be used to flag suspicious entries in large datasets for further scrutiny by human annotators. | false | [] | [] | null | null | null | null | 2022 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
kim-etal-2017-adversarial | https://aclanthology.org/P17-1119 | Adversarial Adaptation of Synthetic or Stale Data | Two types of data shift common in practice are 1. transferring from synthetic data to live user data (a deployment shift), and 2. transferring from stale data to current data (a temporal shift). Both cause a distribution mismatch between training and evaluation, leading to a model that overfits the flawed training data and performs poorly on the test data. We propose a solution to this mismatch problem by framing it as domain adaptation, treating the flawed training dataset as a source domain and the evaluation dataset as a target domain. To this end, we use and build on several recent advances in neural domain adaptation such as adversarial training (Ganin et al., 2016) and domain separation network (Bousmalis et al., 2016), proposing a new effective adversarial training scheme. In both supervised and unsupervised adaptation scenarios, our approach yields clear improvement over strong baselines. | false | [] | [] | null | null | null | null | 2017 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
orr-etal-2014-semi | http://www.lrec-conf.org/proceedings/lrec2014/pdf/511_Paper.pdf | Semi-automatic annotation of the UCU accents speech corpus | The annotation and labeling of speech tasks in large multitask speech corpora is a necessary part of preparing a corpus for distribution. This paper addresses three approaches to annotation and labeling, namely manual, semi-automatic and automatic procedures for labeling the UCU Accent Project speech data, a multilingual multitask longitudinal speech corpus. Accuracy and minimal time investment are the priorities in assessing the efficacy of each procedure. While manual labeling based on aural and visual input should produce the most accurate results, this approach is prone to error because of its repetitive nature. A semi-automatic event detection system requiring manual rejection of false alarms and location and labeling of misses provided the best results. A fully automatic system could not be applied to entire speech recordings because of the variety of tasks and genres. However, it could be used to annotate separate sentences within a specific task. Acoustic confidence measures can correctly detect sentences that do not match the text with an equal error rate of 3.3%. | false | [] | [] | null | null | null | null | 2014 | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
poncelas-etal-2019-combining
|
https://aclanthology.org/R19-1107
|
Combining PBSMT and NMT Back-translated Data for Efficient NMT
|
Neural Machine Translation (NMT) models achieve their best performance when large sets of parallel data are used for training. Consequently, techniques for augmenting the training set have become popular recently. One of these methods is back-translation (Sennrich et al., 2016a), which consists on generating synthetic sentences by translating a set of monolingual, target-language sentences using a Machine Translation (MT) model. Generally, NMT models are used for backtranslation. In this work, we analyze the performance of models when the training data is extended with synthetic data using different MT approaches. In particular we investigate back-translated data generated not only by NMT but also by Statistical Machine Translation (SMT) models and combinations of both. The results reveal that the models achieve the best performances when the training set is augmented with back-translated data created by merging different MT approaches.
| false |
[] |
[] | null | null | null |
This research has been supported by the ADAPT Centre for Digital Content Technology which is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund.This work has also received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 713567.
|
2019
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
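The back-translation recipe analyzed above reduces to pairing real target-language sentences with machine-translated sources. A sketch under the assumption that `translate_to_source` wraps any trained target-to-source system (NMT or SMT); the helper names are hypothetical:

```python
def back_translate(monolingual_targets, translate_to_source):
    """Build synthetic (source, target) pairs from target-side monolingual text."""
    return [(translate_to_source(tgt), tgt) for tgt in monolingual_targets]

# Merging synthetic data from different MT approaches, as the abstract suggests:
# train_pairs = authentic_pairs \
#     + back_translate(mono_targets, smt_model.translate) \
#     + back_translate(mono_targets, nmt_model.translate)
```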
patra-etal-2016-multimodal
|
https://aclanthology.org/C16-1186
|
Multimodal Mood Classification - A Case Study of Differences in Hindi and Western Songs
|
Music information retrieval has emerged as a mainstream research area in the past two decades. Experiments on music mood classification have been performed mainly on Western music based on audio, lyrics and a combination of both. Unfortunately, due to the scarcity of digitalized resources, Indian music fares poorly in music mood retrieval research. In this paper, we identified the mood taxonomy and prepared multimodal mood annotated datasets for Hindi and Western songs. We identified important audio and lyric features using correlation based feature selection technique. Finally, we developed mood classification systems using Support Vector Machines and Feed Forward Neural Networks based on the features collected from audio, lyrics, and a combination of both. The best performing multimodal systems achieved F-measures of 75.1 and 83.5 for classifying the moods of the Hindi and Western songs respectively using Feed Forward Neural Networks. A comparative analysis indicates that the selected features work well for mood classification of the Western songs and produces better results as compared to the mood classification systems for Hindi songs.
| false |
[] |
[] | null | null | null |
The work reported in this paper is supported by a grant from the "Visvesvaraya Ph.D. Scheme for Electronics and IT" funded by Media Lab Asia of Ministry of Electronics and Information Technology (Me-itY), Government of India.
|
2016
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
burga-etal-2015-towards
|
https://aclanthology.org/W15-2107
|
Towards a multi-layered dependency annotation of Finnish
|
We present a dependency annotation scheme for Finnish which aims at respecting the multilayered nature of language. We first tackle the annotation of surfacesyntactic structures (SSyntS) as inspired by the Meaning-Text framework. Exclusively syntactic criteria are used when defining the surface-syntactic relations tagset. Our annotation scheme allows for a direct mapping between surface-syntax and a more semantics-oriented representation, in particular predicate-argument structures. It has been applied to a corpus of Finnish, composed of 2,025 sentences related to weather conditions.
| false |
[] |
[] | null | null | null |
The work described in this paper has been carried out in the framework of the project Personalized Environmental Service Configuration and Delivery Orchestration (PESCaDO), supported by the European Commission under the contract number FP7-ICT-248594.
|
2015
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
iwai-etal-2019-applying
|
https://aclanthology.org/W19-6704
|
Applying Machine Translation to Psychology: Automatic Translation of Personality Adjectives
|
We introduce our approach to apply machine translation to psychology, especially to translate English adjectives in a psychological personality questionnaire. We first extend seed English personality adjectives with a word2vec model trained with web sentences, and then feed the acquired words to a phrase-based machine translation model. We use Moses trained with bilingual corpora that consist of TED subtitles, movie subtitles and Wikipedia. We collect Japanese translations whose translation probabilities are higher than .01 and filter them based on human evaluations. This resulted in 507 Japanese personality descriptors. We conducted a web-survey (N=17,751) and finalized a personality questionnaire. Statistical analyses supported the five-factor structure, reliability and criterion-validity of the newly developed questionnaire. This shows the potential applicability of machine translation to psychology. We discuss further issues related to machine translation application to psychology.
| false |
[] |
[] | null | null | null | null |
2019
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
samuel-etal-1998-dialogue-act
|
https://aclanthology.org/P98-2188
|
Dialogue Act Tagging with Transformation-Based Learning
|
For the task of recognizing dialogue acts, we are applying the Transformation-Based Learning (TBL) machine learning algorithm. To circumvent a sparse data problem, we extract values of well-motivated features of utterances, such as speaker direction, punctuation marks, and a new feature, called dialogue act cues, which we find to be more effective than cue phrases and word n-grams in practice. We present strategies for constructing a set of dialogue act cues automatically by minimizing the entropy of the distribution of dialogue acts in a training corpus, filtering out irrelevant dialogue act cues, and clustering semantically-related words. In addition, to address limitations of TBL, we introduce a Monte Carlo strategy for training efficiently and a committee method for computing confidence measures. These ideas are combined in our working implementation, which labels held-out data as accurately as any other reported system for the dialogue act tagging task.
| false |
[] |
[] | null | null | null |
We wish to thank the members of the VERBMo-BIL research group at DFKI in Germany, particularly Norbert Reithinger, Jan Alexandersson, and Elisabeth Maier, for providing us with the opportunity to work with them and generously granting us access to the VERBMOBIL corpora. This work was partially supported by the NSF Grant #GER-9354869.
|
1998
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
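The entropy-minimization step for selecting dialogue act cues can be made concrete: for each candidate cue, measure the entropy of the dialogue-act distribution over training utterances containing it, and prefer low-entropy cues. A small illustration (not the authors' implementation):

```python
import math
from collections import Counter

def act_entropy(acts):
    """Entropy (bits) of the dialogue-act distribution for one candidate cue."""
    counts = Counter(acts)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# A cue seen with 9 YES-ANSWER and 1 STATEMENT utterances is highly predictive:
# act_entropy(["YES-ANSWER"] * 9 + ["STATEMENT"])  ->  ~0.47 bits
```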
bod-2007-unsupervised
|
https://aclanthology.org/2007.mtsummit-papers.8
|
Unsupervised syntax-based machine translation: the contribution of discontiguous phrases
|
We present a new unsupervised syntax-based MT system, termed U-DOT, which uses the unsupervised U-DOP model for learning paired trees, and which computes the most probable target sentence from the relative frequencies of paired subtrees. We test U-DOT on the German-English Europarl corpus, showing that it outperforms the state-of-the-art phrase-based Pharaoh system. We demonstrate that the inclusion of noncontiguous phrases significantly improves the translation accuracy. This paper presents the first translation results with the data-oriented translation (DOT) model on the Europarl corpus, to the best of our knowledge.
| false |
[] |
[] | null | null | null | null |
2007
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
zhang-2018-comparison
|
https://aclanthology.org/Y18-1095
|
A Comparison of Tone Normalization Methods for Language Variation Research
|
One methodological issue in tonal acoustic analyses is revisited and resolved in this study. Previous tone normalization methods mainly served for categorizing tones but did not aim to preserve sociolinguistic variation. This study, from the perspective of variationist studies, reevaluates the effectiveness of sixteen tone normalization methods and finds that the best tone normalization method is a semitone transformation relative to each speaker's average pitch in hertz.
| false |
[] |
[] | null | null | null |
The author would like to thank Utrecht Institute of Linguistics of Utrecht University, Chinese Scholarship Council and the University of Macau (Startup Research Grant SRG2018-00131-FAH) for supporting this study. Thanks also go to Prof. René Kager and Dr. Hans van de Velde for their very helpful comments and suggestions.
|
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
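The winning normalization above, semitones relative to each speaker's average pitch in hertz, is a direct formula: ST = 12 * log2(F0 / F0_mean). A minimal rendering:

```python
import math

def to_semitones(f0_hz, speaker_mean_hz):
    """Pitch in semitones relative to the speaker's mean F0 (12 semitones per octave)."""
    return 12.0 * math.log2(f0_hz / speaker_mean_hz)

# A speaker averaging 200 Hz who produces 224.5 Hz is ~2 semitones above baseline.
```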
miller-2009-improved
|
https://aclanthology.org/N09-1074
|
Improved Syntactic Models for Parsing Speech with Repairs
|
This paper introduces three new syntactic models for representing speech with repairs. These models are developed to test the intuition that the erroneous parts of speech repairs (reparanda) are not generated or recognized as such while occurring, but only after they have been corrected. Thus, they are designed to minimize the differences in grammar rule applications between fluent and disfluent speech containing similar structure. The three models considered in this paper are also designed to isolate the mechanism of impact, by systematically exploring different variables.
| false |
[] |
[] | null | null | null | null |
2009
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
khwaileh-al-asad-2020-elmo
|
https://aclanthology.org/2020.semeval-1.130
|
ELMo-NB at SemEval-2020 Task 7: Assessing Sense of Humor in Edited News Headlines Using ELMo and NB
|
In this paper, we present our submission for SemEval-2020 competition subtask 1 in Task 7 (Hossain et al., 2020a): Assessing Humor in Edited News Headlines. The task consists of estimating the hilariousness of news headlines that have been modified manually by humans using micro-edit changes to make them funny. Our approach is constructed to improve on a couple of aspects: preprocessing with an emphasis on humor sense detection, using embeddings from a state-of-the-art language model (ELMo), and ensembling the results obtained with the machine learning model Naïve Bayes (NB) and with deep learning pretrained models. Our ELMo-NB submission scored 0.5642 on the competition leaderboard, where results were measured by Root Mean Squared Error (RMSE).
| false |
[] |
[] | null | null | null |
We would like to extend our sincere thanks to Dr. Malak Abdullah for her efforts and support. In order to finish this work, we had a lot of straight directions and advice from her, during the fall semester, 2019.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
liu-etal-2020-metaphor
|
https://aclanthology.org/2020.figlang-1.34
|
Metaphor Detection Using Contextual Word Embeddings From Transformers
|
The detection of metaphors can provide valuable information about a given text and is crucial to sentiment analysis and machine translation. In this paper, we outline the techniques for word-level metaphor detection used in our submission to the Second Shared Task on Metaphor Detection. We propose using both BERT and XLNet language models to create contextualized embeddings and a bidirectional LSTM to identify whether a given word is a metaphor. Our best model achieved F1-scores of 68.0% on VUA AllPOS, 73.0% on VUA Verbs, 66.9% on TOEFL AllPOS, and 69.7% on TOEFL Verbs, placing 7th, 6th, 5th, and 5th respectively. In addition, we outline another potential approach with a KNN-LSTM ensemble model that we did not have enough time to implement given the deadline for the competition. We show that a KNN classifier provides a similar F1-score on a validation set as the LSTM and yields different information on metaphors.
| false |
[] |
[] | null | null | null |
The authors thank the organizers of the Second Shared Task on Metaphor Detection and the rest of the Duke Data Science Team. We also thank the anonymous reviewers for their insightful comments.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
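The tagging architecture named above, contextual embeddings feeding a bidirectional LSTM with a per-token classifier, can be sketched as follows; random vectors stand in for BERT/XLNet features, and all sizes are hypothetical:

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, emb_dim=768, hidden=256, n_labels=2):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_labels)  # metaphor vs. literal

    def forward(self, embeddings):            # (batch, seq_len, emb_dim)
        h, _ = self.lstm(embeddings)
        return self.out(h)                    # per-token logits

model = BiLSTMTagger()
contextual = torch.randn(1, 12, 768)  # stand-in for contextualized embeddings
logits = model(contextual)            # shape: (1, 12, 2)
```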
vertanen-kristensson-2011-imagination
|
https://aclanthology.org/D11-1065
|
The Imagination of Crowds: Conversational AAC Language Modeling using Crowdsourcing and Large Data Sources
|
Augmented and alternative communication (AAC) devices enable users with certain communication disabilities to participate in everyday conversations. Such devices often rely on statistical language models to improve text entry by offering word predictions. These predictions can be improved if the language model is trained on data that closely reflects the style of the users' intended communications. Unfortunately, there is no large dataset consisting of genuine AAC messages. In this paper we demonstrate how we can crowdsource the creation of a large set of fictional AAC messages. We show that these messages model conversational AAC better than the currently used datasets based on telephone conversations or newswire text. We leverage our crowdsourced messages to intelligently select sentences from much larger sets of Twitter, blog and Usenet data. Compared to a model trained only on telephone transcripts, our best performing model reduced perplexity on three test sets of AAC-like communications by 60-82% relative. This translated to a potential keystroke savings in a predictive keyboard interface of 5-11%.
| false |
[] |
[] | null | null | null |
We thank Keith Trnka and Horabail Venkatagiri for their assistance. This work was supported by the Engineering and Physical Sciences Research Council (grant number EP/H027408/1).
|
2011
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
artetxe-etal-2019-effective
|
https://aclanthology.org/P19-1019
|
An Effective Approach to Unsupervised Machine Translation
|
While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through on-the-fly back-translation. Together, we obtain large improvements over the previous state-of-the-art in unsupervised machine translation. For instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014.
| false |
[] |
[] | null | null | null |
This research was partially supported by the Spanish MINECO (UnsupNMT TIN2017-91692-EXP and DOMINO PGC2018-102041-B-I00, cofunded by EU FEDER), the BigKnowledge project (BBVA foundation grant 2018), the UPV/EHU (excellence research group), and the NVIDIA GPU grant program. Mikel Artetxe was supported by a doctoral grant from the Spanish MECD.
|
2019
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
melamud-etal-2016-role
|
https://aclanthology.org/N16-1118
|
The Role of Context Types and Dimensionality in Learning Word Embeddings
|
We provide the first extensive evaluation of how using different types of context to learn skip-gram word embeddings affects performance on a wide range of intrinsic and extrinsic NLP tasks. Our results suggest that while intrinsic tasks tend to exhibit a clear preference to particular types of contexts and higher dimensionality, more careful tuning is required for finding the optimal settings for most of the extrinsic tasks that we considered. Furthermore, for these extrinsic tasks, we find that once the benefit from increasing the embedding dimensionality is mostly exhausted, simple concatenation of word embeddings, learned with different context types, can yield further performance gains. As an additional contribution, we propose a new variant of the skip-gram model that learns word embeddings from weighted contexts of substitute words.
| false |
[] |
[] | null | null | null |
We thank Do Kook Choe for providing us the jackknifed version of WSJ. We also wish to thank the IBM Watson team for helpful discussions and our anonymous reviewers for their comments. This work was partially supported by the Israel Science Foundation grant 880/12 and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).
|
2016
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
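Learning skip-gram embeddings under different context settings and concatenating them, the trick that yielded further gains above, can be sketched with gensim; the toy corpus and dimensions are placeholders:

```python
import numpy as np
from gensim.models import Word2Vec

sentences = [["the", "cat", "sat"], ["the", "dog", "ran"]]  # toy corpus

narrow = Word2Vec(sentences, vector_size=50, window=2, sg=1, min_count=1)
wide = Word2Vec(sentences, vector_size=50, window=10, sg=1, min_count=1)

# Concatenate embeddings learned with different context types/widths.
vec = np.concatenate([narrow.wv["cat"], wide.wv["cat"]])  # 100-dimensional
```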
tillmann-2004-unigram
|
https://aclanthology.org/N04-4026
|
A Unigram Orientation Model for Statistical Machine Translation
|
In this paper, we present a unigram segmentation model for statistical machine translation where the segmentation units are blocks: pairs of phrases without internal structure. The segmentation model uses a novel orientation component to handle swapping of neighbor blocks. During training, we collect block unigram counts with orientation: we count how often a block occurs to the left or to the right of some predecessor block. The orientation model is shown to improve translation performance over two models: 1) no block reordering is used, and 2) the block swapping is controlled only by a language model. We show experimental results on a standard Arabic-English translation task.
| false |
[] |
[] | null | null | null |
This work was partially supported by DARPA and monitored by SPAWAR under contract No. N66001-99-2-8916. The paper has greatly profited from discussion with Kishore Papineni and Fei Xia.
|
2004
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
wang-etal-2019-role
|
https://aclanthology.org/D19-6405
|
On the Role of Scene Graphs in Image Captioning
|
Scene graphs represent semantic information in images, which can help image captioning systems to produce more descriptive outputs versus using only the image as context. Recent captioning approaches rely on ad-hoc approaches to obtain graphs for images. However, those graphs introduce noise, and it is unclear what effect parser errors have on captioning accuracy. In this work, we investigate to what extent scene graphs can help image captioning. Our results show that a state-of-the-art scene graph parser can boost performance almost as much as the ground truth graphs, showing that the bottleneck currently resides more in the captioning models than in the performance of the scene graph parser.
| false |
[] |
[] | null | null | null | null |
2019
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
gong-etal-2015-hashtag
|
https://aclanthology.org/D15-1046
|
Hashtag Recommendation Using Dirichlet Process Mixture Models Incorporating Types of Hashtags
|
In recent years, the task of recommending hashtags for microblogs has been given increasing attention. Various methods have been proposed to study the problem from different aspects. However, most of the recent studies have not considered the differences in the types or uses of hashtags. In this paper, we introduce a novel nonparametric Bayesian method for this task. Based on the Dirichlet Process Mixture Models (DPMM), we incorporate the type of hashtag as a hidden variable. The results of experiments on the data collected from a real world microblogging service demonstrate that the proposed method outperforms state-of-the-art methods that do not consider these aspects. By taking these aspects into consideration, the relative improvement of the proposed method over the state-of-the-art methods is around 12.2% in F1-score.
| false |
[] |
[] | null | null | null |
This work was partially funded by the National Natural Science Foundation of China (No. 61473092 and 61472088), the National High Technology Research and Development Program of China (No. 2015AA011802), and Shanghai Science and Technology Development Funds (13dz226020013511504300).
|
2015
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ambati-etal-2010-active-semi
|
https://aclanthology.org/W10-0102
|
Active Semi-Supervised Learning for Improving Word Alignment
|
Word alignment models form an important part of building statistical machine translation systems. Semi-supervised word alignment aims to improve the accuracy of automatic word alignment by incorporating full or partial alignments acquired from humans. Such dedicated elicitation effort is often expensive and depends on availability of bilingual speakers for the language-pair. In this paper we study active learning query strategies to carefully identify highly uncertain or most informative alignment links that are proposed under an unsupervised word alignment model. Manual correction of such informative links can then be applied to create a labeled dataset used by a semi-supervised word alignment model. Our experiments show that using active learning leads to maximal reduction of alignment error rates with reduced human effort.
| false |
[] |
[] | null | null | null |
This research was partially supported by DARPA under grant NBCHC080097. Any opinions, findings, and conclusions expressed in this paper are those of the authors and do not necessarily reflect the views of the DARPA. The first author would like to thank Qin Gao for the semi-supervised word alignment software and help with running experiments.
|
2010
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
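One standard uncertainty query for alignment links, consistent with the strategies studied above, is to pick the links whose posterior under the unsupervised aligner is closest to maximal uncertainty; the posterior dictionary here is a hypothetical input:

```python
def most_uncertain_links(link_posteriors, k=100):
    """link_posteriors: (src_idx, tgt_idx) -> posterior probability of the link.
    Returns the k links closest to p = 0.5, i.e., the least confident ones."""
    return sorted(link_posteriors, key=lambda link: abs(link_posteriors[link] - 0.5))[:k]

# The selected links are hand-corrected, then fed to the semi-supervised aligner.
```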
tilk-alumae-2017-low
|
https://aclanthology.org/W17-4503
|
Low-Resource Neural Headline Generation
|
Recent neural headline generation models have shown great results, but are generally trained on very large datasets. We focus our efforts on improving headline quality on smaller datasets by the means of pretraining. We propose new methods that enable pre-training all the parameters of the model and utilize all available text, resulting in improvements by up to 32.4% relative in perplexity and 2.84 points in ROUGE.
| false |
[] |
[] | null | null | null |
We would like to thank NVIDIA for the donated GPU, the anonymous reviewers for their valuable comments, and Kyunghyun Cho for the help with the CNN/Daily Mail dataset.
|
2017
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
stafanovics-etal-2020-mitigating
|
https://aclanthology.org/2020.wmt-1.73
|
Mitigating Gender Bias in Machine Translation with Target Gender Annotations
|
When translating "The secretary asked for details." to a language with grammatical gender, it might be necessary to determine the gender of the subject "secretary". If the sentence does not contain the necessary information, it is not always possible to disambiguate. In such cases, machine translation systems select the most common translation option, which often corresponds to the stereotypical translations, thus potentially exacerbating prejudice and marginalisation of certain groups and people. We argue that the information necessary for an adequate translation can not always be deduced from the sentence being translated or even might depend on external knowledge. Therefore, in this work, we propose to decouple the task of acquiring the necessary information from the task of learning to translate correctly when such information is available. To that end, we present a method for training machine translation systems to use word-level annotations containing information about subject's gender. To prepare training data, we annotate regular source language words with grammatical gender information of the corresponding target language words. Using such data to train machine translation systems reduces their reliance on gender stereotypes when information about the subject's gender is available. Our experiments on five language pairs show that this allows improving accuracy on the WinoMT test set by up to 25.8 percentage points.
| true |
[] |
[] |
Gender Equality
| null | null |
This research was partly done within the scope of the undergraduate thesis project of the first author at the University of Latvia and supervised at Tilde. This research has been supported by the European Regional Development Fund within the joint project of SIA TILDE and University of Latvia "Multilingual Artificial Intelligence Based Human Computer Interaction" No. 1.1.1.1/18/A/148.
|
2020
| false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false |
tan-etal-2014-sensible
|
https://aclanthology.org/S14-2094
|
Sensible: L2 Translation Assistance by Emulating the Manual Post-Editing Process
|
This paper describes the Post-Editor Z system submitted to the L2 writing assistant task in SemEval-2014. The aim of the task is to build a translation assistance system to translate untranslated sentence fragments. This is not unlike the task of post-editing where human translators improve machine-generated translations. Post-Editor Z emulates the manual process of post-editing by (i) crawling and extracting parallel sentences that contain the untranslated fragments from a Web-based translation memory, (ii) extracting the possible translations of the fragments indexed by the translation memory and (iii) applying simple cosine-based sentence similarity to rank possible translations for the untranslated fragment.
| false |
[] |
[] | null | null | null |
The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007-2013/ under REA grant agreement n • 317471.
|
2014
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
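Step (iii) above is ordinary bag-of-words cosine similarity between the input sentence and each retrieved translation-memory sentence; a self-contained sketch:

```python
import math
from collections import Counter

def cosine(a_tokens, b_tokens):
    a, b = Counter(a_tokens), Counter(b_tokens)
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Rank candidate translations by similarity of their source sides to the input.
```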
eckart-etal-2012-influence
|
http://www.lrec-conf.org/proceedings/lrec2012/pdf/476_Paper.pdf
|
The Influence of Corpus Quality on Statistical Measurements on Language Resources
|
The quality of statistical measurements on corpora is strongly related to a strict definition of the measuring process and to corpus quality. In the case of multiple result inspections, an exact measurement of previously specified parameters ensures compatibility of the different measurements performed by different researchers on possibly different objects. Hence, the comparison of different values requires an exact description of the measuring process. To illustrate this correlation, the influence of different definitions for the concepts word and sentence is shown for several properties of large text corpora. It is also shown that corpus pre-processing strongly influences corpus size and quality as well. As an example, near-duplicate sentences are identified as a source of many statistical irregularities. The problem of strongly varying results especially holds for Web corpora with a large set of pre-processing steps. Here, a well-defined and language independent pre-processing is indispensable for language comparison based on measured values. Conversely, irregularities found in such measurements are often a result of poor pre-processing and therefore such measurements can help to improve corpus quality.
| false |
[] |
[] | null | null | null | null |
2012
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
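A crude form of the near-duplicate filtering the abstract singles out, hashing aggressively normalized sentences, might look like this; the normalization is one possible choice, not the paper's exact procedure:

```python
import re

def normalize(sentence):
    # Lowercase and collapse non-alphanumeric spans so trivial variants collide.
    return re.sub(r"[^0-9a-z]+", " ", sentence.lower()).strip()

def drop_near_duplicates(sentences):
    seen, kept = set(), []
    for s in sentences:
        key = normalize(s)
        if key not in seen:
            seen.add(key)
            kept.append(s)
    return kept
```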
fujiki-etal-2003-automatic
|
https://aclanthology.org/E03-1061
|
Automatic Acquisition of Script Knowledge from a Text Collection
|
In this paper, we describe a method for automatic acquisition of script knowledge from a Japanese text collection. Script knowledge represents a typical sequence of actions that occur in a particular situation. We extracted sequences (pairs) of actions occurring in time order from a Japanese text collection and then chose those that were typical of certain situations by ranking these sequences (pairs) in terms of the frequency of their occurrence. To extract sequences of actions occurring in time order, we constructed a text collection in which texts describing facts relating to a similar situation were clustered together and arranged in time order. We also describe a preliminary experiment with our acquisition system and discuss the results.
| false |
[] |
[] | null | null | null | null |
2003
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
utt-etal-2013-curious
|
https://aclanthology.org/W13-0604
|
The Curious Case of Metonymic Verbs: A Distributional Characterization
|
Logical metonymy combines an event-selecting verb with an entity-denoting noun (e.g., The writer began the novel), triggering a covert event interpretation (e.g., reading, writing). Experimental investigations of logical metonymy must assume a binary distinction between metonymic (i.e. eventselecting) verbs and non-metonymic verbs to establish a control condition. However, this binary distinction (whether a verb is metonymic or not) is mostly made on intuitive grounds, which introduces a potential confounding factor. We describe a corpus-based approach which characterizes verbs in terms of their behavior at the syntax-semantics interface. The model assesses the extent to which transitive verbs prefer event-denoting objects over entity-denoting objects. We then test this "eventhood" measure on psycholinguistic datasets, showing that it can distinguish not only metonymic from non-metonymic verbs, but that it can also capture more fine-grained distinctions among different classes of metonymic verbs, putting such distinctions into a new graded perspective.
| false |
[] |
[] | null | null | null |
Acknowledgements The research for this paper was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft) as part of the SFB 732 "Incremental specification in context" / project D6 "Lexical-semantic factors in event interpretation" at the University of Stuttgart.
|
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
hsu-glass-2008-n
|
https://aclanthology.org/D08-1087
|
N-gram Weighting: Reducing Training Data Mismatch in Cross-Domain Language Model Estimation
|
In domains with insufficient matched training data, language models are often constructed by interpolating component models trained from partially matched corpora. Since the n-grams from such corpora may not be of equal relevance to the target domain, we propose an n-gram weighting technique to adjust the component n-gram probabilities based on features derived from readily available segmentation and metadata information for each corpus. Using a log-linear combination of such features, the resulting model achieves up to a 1.2% absolute word error rate reduction over a linearly interpolated baseline language model on a lecture transcription task.
| false |
[] |
[] | null | null | null |
We would like to thank the anonymous reviewers for their constructive feedback. This research is supported in part by the T-Party Project, a joint research program between MIT and Quanta Computer Inc.
|
2008
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
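The baseline being improved on is plain linear interpolation of component language models; the sketch below shows that baseline with a hook for per-n-gram weights, to make the contrast concrete (an illustration, not the paper's log-linear estimation procedure):

```python
def interpolate(p_components, lambdas, ngram_weights=None):
    """Linear interpolation of component LM probabilities p_i(w | h).

    ngram_weights optionally rescales each component for this particular
    n-gram (the spirit of n-gram weighting). With weights applied, the sum
    is no longer normalized; a real model renormalizes over the vocabulary.
    """
    if ngram_weights is None:
        ngram_weights = [1.0] * len(p_components)
    return sum(l * w * p for l, w, p in zip(lambdas, ngram_weights, p_components))

# Baseline: p = interpolate([p_lecture, p_web], lambdas=[0.7, 0.3])
```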
berend-etal-2013-lfg
|
https://aclanthology.org/W13-3608
|
LFG-based Features for Noun Number and Article Grammatical Errors
|
We introduce here a participating system of the CoNLL-2013 Shared Task "Grammatical Error Correction". We focused on the noun number and article error categories and constructed a supervised learning system for solving these tasks. We carried out feature engineering and we found that (among others) the f-structure of an LFG parser can provide very informative features for the machine learning system.
| false |
[] |
[] | null | null | null |
This work was supported in part by the European Union and the European Social Fund through the project FuturICT.hu (grant no.: TÁMOP-4.2.2.C-11/1/KONV-2012-0013).
|
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
guo-etal-2018-soft
|
https://aclanthology.org/P18-1064
|
Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation
|
An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input document. We also propose novel multitask architectures with high-level (semantic) layer-specific sharing across multiple encoder and decoder layers of the three tasks, as well as soft-sharing mechanisms (and show performance ablations and analysis examples of each contribution). Overall, we achieve statistically significant improvements over the state-of-the-art on both the CNN/DailyMail and Gigaword datasets, as well as on the DUC-2002 transfer setup. We also present several quantitative and qualitative analysis studies of our model's learned saliency and entailment skills.
| false |
[] |
[] | null | null | null |
We thank the reviewers for their helpful comments. This work was supported by DARPA (YFA17-D17AP00022), Google Faculty Research Award, Bloomberg Data Science Research Grant, and NVidia GPU awards. The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency.
|
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
schulz-etal-2019-analysis
|
https://aclanthology.org/P19-1265
|
Analysis of Automatic Annotation Suggestions for Hard Discourse-Level Tasks in Expert Domains
|
Many complex discourse-level tasks can aid domain experts in their work but require costly expert annotations for data creation. To speed up and ease annotations, we investigate the viability of automatically generated annotation suggestions for such tasks. As an example, we choose a task that is particularly hard for both humans and machines: the segmentation and classification of epistemic activities in diagnostic reasoning texts. We create and publish a new dataset covering two domains and carefully analyse the suggested annotations. We find that suggestions have positive effects on annotation speed and performance, while not introducing noteworthy biases. Envisioning suggestion models that improve with newly annotated texts, we contrast methods for continuous model adjustment and suggest the most effective setup for suggestions in future expert tasks.
| false |
[] |
[] | null | null | null |
This work was supported by the German Federal Ministry of Education and Research (BMBF) under the reference 16DHL1040 (FAMULUS). We thank our annotators M. Achtner, S. Eichler, V. Jung, H. Mißbach, K. Nederstigt, P. Schäffner, R. Schönberger, and H. Werl. We also acknowledge Samaun Ibna Faiz for his contributions to the model adjustment experiments.
|
2019
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
de-melo-bansal-2013-good
|
https://aclanthology.org/Q13-1023
|
Good, Great, Excellent: Global Inference of Semantic Intensities
|
Adjectives like good, great, and excellent are similar in meaning, but differ in intensity. Intensity order information is very useful for language learners as well as in several NLP tasks, but is missing in most lexical resources (dictionaries, WordNet, and thesauri). In this paper, we present a primarily unsupervised approach that uses semantics from Web-scale data (e.g., phrases like good but not excellent) to rank words by assigning them positions on a continuous scale. We rely on Mixed Integer Linear Programming to jointly determine the ranks, such that individual decisions benefit from global information. When ranking English adjectives, our global algorithm achieves substantial improvements over previous work on both pairwise and rank correlation metrics (specifically, 70% pairwise accuracy as compared to only 56% by previous work). Moreover, our approach can incorporate external synonymy information (increasing its pairwise accuracy to 78%) and extends easily to new languages. We also make our code and data freely available.
| false |
[] |
[] | null | null | null |
We would like to thank the editor and the anonymous reviewers for their helpful feedback.
|
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
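The pairwise evidence feeding the MILP, patterns like "good but not excellent" implying intensity(good) < intensity(excellent), can be harvested with a simple regular expression; a hedged sketch, with the pattern inventory deliberately reduced to one:

```python
import re
from collections import Counter

# "X, but not Y" suggests intensity(X) < intensity(Y).
WEAKER_THAN = re.compile(r"\b(\w+), but not (\w+)\b")

def pairwise_evidence(corpus_lines):
    evidence = Counter()
    for line in corpus_lines:
        for weak, strong in WEAKER_THAN.findall(line):
            evidence[(weak, strong)] += 1
    return evidence  # input to a global (e.g., MILP-based) ranking step

print(pairwise_evidence(["the food was good, but not excellent"]))
# Counter({('good', 'excellent'): 1})
```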
mizuta-2004-analysis
|
https://aclanthology.org/Y04-1007
|
An Analysis of Japanese ta / teiru in a Dynamic Semantics Framework and a Comparison with Korean Temporal Markers a nohta / a twuta
|
In this paper I will shed new light on the semantics of Japanese tense-aspect markers ta and teiru from dynamic semantics and contrastive perspectives. The focus of investigation will be on the essential difference between ta and teiru used in an aspectual sense related to a perfect. I analyze the asymmetry between ta and teiru with empirical data and illustrate it in the DRT framework (Discourse Representation Theory: Kamp and Reyle (1993)). Defending the intuition that ta and teiru make respectively an eventive and a stative description of eventualities, I argue that ta is committed to an assertion of the triggering event whereas teiru is not. In the case of teiru, a triggering event, if there is any, is only entailed. In DRT, ta and teiru introduce respectively an event and a state as a condition into the main DRS. Teiru may introduce a triggering event only as a condition in an embedded DRS. I also illustrate how the proposed analysis of the perfect meaning fits into a more general scheme of ta and teiru, and analyze ta and teiru in a discourse. Furthermore, in DRT terms, I will compare Japanese ta / teiru with Korean perfect-related temporal markers a nohta / a twuta in light of Lee (1996). (1) a. [The water in the kettle comes to the boil while the speaker sees it.] Yoshi, o-yu-ga wai-ta / ?? teiru. All right, Hon-hot-water-Nom (come-to-the-)boil-Past / State-Nonpast o.k. 'All right, the water has (just) come to the boil.' / ?? 'The water is on the boil.' b. [The speaker put the kettle on the gas and left. Some time later he comes back and finds the water boiling.] Ah, o-yu-ga wai-ta / teiru. 'Oh, the water has come to the boil.' / 'Oh, the water is on the boil.' c. [The speaker comes to the kitchen and finds the water boiling. (He doesn't know who put the kettle on the gas.)] (Footnote 1) I assume that the Japanese tense / aspect is encoded in terms of tei(ru) (stative) / non-tei(ru) (non-stative) forms and ta (past) / non-ta (nonpast) forms. Here I focus on 'non-tei(ru) + ta' and 'tei(ru) + non-ta' combinations. (Footnote 2) For practical reasons I use a single gloss 'Past' for ta with any meaning.
| false |
[] |
[] | null | null | null |
I am grateful to the anonymous reviewer of my abstract and to Norihiro Ogata (Osaka University) for helpful comments. Shortcomings are of course solely mine.
|
2004
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
obermeier-1985-temporal
|
https://aclanthology.org/P85-1002
|
Temporal Inferences in Medical Texts
|
The objectives of this paper are twofold, whereby the computer program is meant to be a particular implementation of a general natural language [NL] processing system [NLPS] which could be used for different domains.
| true |
[] |
[] |
Good Health and Well-Being
| null | null | null |
1985
| false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
matsuzaki-etal-2013-complexity
|
https://aclanthology.org/I13-1009
|
The Complexity of Math Problems -- Linguistic, or Computational?
|
We present a simple, logic-based architecture for solving math problems written in natural language. A problem is firstly translated to a logical form. It is then rewritten into the input language of a solver algorithm and finally the solver finds an answer. Such a clean decomposition of the task however does not come for free. First, despite its formality, math text still exploits the flexibility of natural language to convey its complex logical content succinctly. We propose a mechanism to fill the gap between the simple form and the complex meaning while adhering to the principle of compositionality. Second, since the input to the solver is derived by strictly following the text, it may require far more computation than those derived by a human, and may go beyond the capability of the current solvers. Empirical study on Japanese university entrance examination problems showed positive results indicating the viability of the approach, which opens up a way towards a true end-to-end problem solving system through the synthesis of the advances in linguistics, NLP, and computer math.
| false |
[] |
[] | null | null | null | null |
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
wang-etal-2021-predicting
|
https://aclanthology.org/2021.rocling-1.18
|
Predicting elders' cognitive flexibility from their language use
|
Increasing research efforts are directed towards the relationship between cognitive decline and language use. However, few of them had focused specifically on how language use is related to cognitive flexibility. This study recruited 51 elders aged 53-74 to discuss their daily activities in focus groups. The transcribed discourse was analyzed using the Chinese version of LIWC (Lin et al., 2020; Pennebaker et al., 2015) for cognitive complexity and dynamic language as well as content words related to elders' daily activities. The interruption behavior during conversation was also analyzed. The results showed that, after controlling for education, gender and age, cognitive flexibility performance was accompanied by the increasing adoption of dynamic language, insight words and family words. These findings serve as the basis for the prediction of elders' cognitive flexibility through their daily language use.
| true |
[] |
[] |
Good Health and Well-Being
|
Reduced Inequalities
| null | null |
2021
| false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false |
yao-etal-2012-probabilistic
|
https://aclanthology.org/W12-3022
|
Probabilistic Databases of Universal Schema
|
In data integration we transform information from a source into a target schema. A general problem in this task is loss of fidelity and coverage: the source expresses more knowledge than can fit into the target schema, or knowledge that is hard to fit into any schema at all. This problem is taken to an extreme in information extraction (IE) where the source is natural language. To address this issue, one can either automatically learn a latent schema emergent in text (a brittle and ill-defined task), or manually extend schemas. We propose instead to store data in a probabilistic database of universal schema. This schema is simply the union of all source schemas, and the probabilistic database learns how to predict the cells of each source relation in this union. For example, the database could store Freebase relations and relations that correspond to natural language surface patterns. The database would learn to predict what freebase relations hold true based on what surface patterns appear, and vice versa. We describe an analogy between such databases and collaborative filtering models, and use it to implement our paradigm with probabilistic PCA, a scalable and effective collaborative filtering method.
| false |
[] |
[] | null | null | null |
This work was supported in part by the Center for Intelligent Information Retrieval and the University of Massachusetts and in part by UPenn NSF medium IIS-0803847. We gratefully acknowledge the support of Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of DARPA, AFRL, or the US government.
|
2012
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
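The collaborative-filtering analogy can be illustrated with a truncated SVD standing in for probabilistic PCA (a simplification): a low-rank reconstruction assigns plausibility scores to unobserved cells of the entity-pair by relation matrix.

```python
import numpy as np

# Rows: entity pairs; columns: relations (KB relations + surface patterns).
# 1 = observed fact/pattern, 0 = unobserved.
M = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)

U, s, Vt = np.linalg.svd(M, full_matrices=False)
k = 2  # latent dimensionality (hypothetical)
scores = U[:, :k] * s[:k] @ Vt[:k, :]  # higher score = more plausible cell
```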
wu-etal-2006-computational
|
https://aclanthology.org/P06-4011
|
Computational Analysis of Move Structures in Academic Abstracts
|
This paper introduces a method for computational analysis of move structures in abstracts of research articles. In our approach, sentences in a given abstract are analyzed and labeled with a specific move in light of various rhetorical functions. The method involves automatically gathering a large number of abstracts from the Web and building a language model of abstract moves. We also present a prototype concordancer, CARE, which exploits the move-tagged abstracts for digital learning. This system provides a promising approach to Webbased computer-assisted academic writing.
| true |
[] |
[] |
Industry, Innovation and Infrastructure
| null | null | null |
2006
| false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false |
clemenceau-roche-1993-enhancing
|
https://aclanthology.org/E93-1059
|
Enhancing a large scale dictionary with a two-level system
|
We present in this paper a morphological analyzer and generator for French that contains a dictionary of 700,000 inflected words called DELAF, and a full two-level system aimed at the analysis of new derivatives. Hence, this tool recognizes and generates both correct inflected forms of French simple words (DELAF lookup procedure) and new derivatives and their inflected forms (two-level analysis). Moreover, a clear distinction is made between dictionary look-up processes and new word analyses in order to clearly identify the analyses that involve heuristic rules. We tested this tool upon a French corpus of 1,300,000 words with significant results (Clemenceau D. 1992). With regard to efficiency, since this tool is compiled into a unique transducer, it provides a very fast look-up procedure (1,100 words per second) at a low memory cost (around 1.3 Mb in RAM).
| false |
[] |
[] | null | null | null | null |
1993
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
theeramunkong-etal-1997-exploiting
|
https://aclanthology.org/W97-1511
|
Exploiting Contextual Information in Hypothesis Selection for Grammar Refinement
|
In this paper, we propose a new framework of grammar development and some techniques for exploiting contextual information in a process of grammar refinement. The proposed framework involves two processes, partial grammar acquisition and grammar refinement. In the former process, a rough grammar is constructed from a bracketed corpus. The grammar is later refined by the latter process where a combination of rule-based and corpus-based approaches is applied. Since there may be more than one rule introduced as alternative hypotheses to recover the analysis of sentences which cannot be parsed by the current grammar, we propose a method to give priority to these hypotheses based on local contextual information. Through experiments, our hypothesis selection is evaluated and its effectiveness is shown.
| false |
[] |
[] | null | null | null |
We would like to thank the EDR organization for permitting us to access the EDR corpus. Special thanks go to Dr. Ratana Rujiravanit, who helped me keenly proofread a draft of this paper. We also wish to thank the members of the Okumura laboratory at JAIST for their useful comments and their technical support.
|
1997
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
rasooli-tetreault-2013-joint
|
https://aclanthology.org/D13-1013
|
Joint Parsing and Disfluency Detection in Linear Time
|
We introduce a novel method to jointly parse and detect disfluencies in spoken utterances. Our model can use arbitrary features for parsing sentences and adapt itself with out-of-domain data. We show that our method, based on transition-based parsing, performs at a high level of accuracy for both the parsing and disfluency detection tasks. Additionally, our method is the fastest for the joint task, running in linear time.
| false |
[] |
[] | null | null | null |
We would like to thank anonymous reviewers for their helpful comments on the paper. Additionally, we were aided by researchers by their prompt responses to our many questions: Mark Core, Luciana Ferrer, Kallirroi Georgila, Mark Johnson, Jeremy Kahn, Yang Liu, Xian Qian, Kenji Sagae, and Wen Wang. Finally, this work was conducted during the first author's summer internship at the Nuance Sunnyvale Research Lab. We would like to thank the researchers in the group for the helpful discussions and assistance on different aspects of the problem. In particular, we would like to thank Chris Brew, Ron Kaplan, Deepak Ramachandran and Adwait Ratnaparkhi.
|
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
delmonte-2016-venseseval
|
https://aclanthology.org/S16-1123
|
VENSESEVAL at Semeval-2016 Task 2 iSTS - with a full-fledged rule-based approach
|
In our paper we present our rule-based system for semantic processing. In particular we show examples and solutions that may challenge our approach. We then discuss problems and shortcomings of Task 2-iSTS. We comment on the existence of a tension between the need, on the one side, to make the task as much as possible "semantically feasible", and, on the other, the fact that the detailed presentation and some notes in the guidelines refer to inferential processes, paraphrases and the use of commonsense knowledge of the world for the interpretation to work. We then present results and some conclusions.
| false |
[] |
[] | null | null | null | null |
2016
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
shwartz-etal-2020-unsupervised
|
https://aclanthology.org/2020.emnlp-main.373
|
Unsupervised Commonsense Question Answering with Self-Talk
|
Natural language understanding involves reading between the lines with implicit background knowledge. Current systems either rely on pretrained language models as the sole implicit source of world knowledge, or resort to external knowledge bases (KBs) to incorporate additional relevant knowledge. We propose an unsupervised framework based on self-talk as a novel alternative to multiple-choice commonsense tasks. Inspired by inquiry-based discovery learning (Bruner, 1961), our approach inquires language models with a number of information-seeking questions such as "what is the definition of ..." to discover additional background knowledge. Empirical results demonstrate that the self-talk procedure substantially improves the performance of zero-shot language model baselines on four out of six commonsense benchmarks, and competes with models that obtain knowledge from external KBs. While our approach improves performance on several benchmarks, the self-talk induced knowledge even when leading to correct answers is not always seen as helpful by human judges, raising interesting questions about the inner workings of pre-trained language models for commonsense reasoning.
| false |
[] |
[] | null | null | null |
This research was supported in part by NSF (IIS-1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), and DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031).
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
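The self-talk loop, prompting the LM with information-seeking prefixes and appending the generated clarification to the context, can be sketched with the Hugging Face transformers pipeline; the prompt wording is illustrative, not the paper's exact templates:

```python
from transformers import pipeline, set_seed

set_seed(0)
generator = pipeline("text-generation", model="gpt2")

context = "Karen put the kettle on."
prefix = "What is the definition of a kettle? The definition of a kettle is"
clarification = generator(context + " " + prefix,
                          max_new_tokens=15, do_sample=True)[0]["generated_text"]

# The clarification is appended to the context before each multiple-choice
# answer is scored with the same (or a second) language model.
```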
berwick-1984-strong
|
https://aclanthology.org/J84-3005
|
Strong Generative Capacity, Weak Generative Capacity, and Modern Linguistic Theories
|
What makes a language a natural language? A longstanding tradition in generative grammar holds that a language is natural just in case it is learnable under a constellation of auxiliary assumptions about input evidence available to children. Yet another approach seeks some key mathematical property that distinguishes the natural languages from all possible symbol-systems. With some exceptions -for example, Chomsky's demonstration that a complete characterization of our grammatical knowledge lies beyond the power of finite state languages -the mathematical approach has not provided clear-cut results. For example, for a variety of reasons we cannot say that the predicate 'is context-free' characterizes all and only the natural languages.
Still another use of mathematical analysis in linguistics has been to diagnose a proposed grammatical formalism as too powerful (allowing too many grammars or languages) rather than as too weak. Such a diagnosis was supposed by some to follow from Peters and Ritchie's demonstration that the theory of transformational grammar as described in Chomsky's Aspects of the Theory of Syntax could specify grammars to generate any recursively enumerable set. For some this demonstration marked a watershed in the formal analysis of transformational grammar. One general reaction (not prompted by the Peters and Ritchie result alone) was to turn to other theories of grammar designed to explicitly avoid the problems of a theory that could specify an arbitrary Turing machine computation. The proposals for generalized phrase structure grammar (GPSG) and lexical-functional grammar (LFG) have explicitly emphasized this point. GPSG aims for grammars that generate context-free languages (though there is some recent wavering on this point; see Pullum 1984); LFG, for languages that are at worst context-sensitive. Whatever the merits of the arguments for this restriction in terms of weak generative capacity -and they are far from obvious, as discussed at length in Berwick and Weinberg (1983) -one point remains: the switch was prompted by criticism of the nearly two-decades-old Aspects theory.
| false |
[] |
[] | null | null | null |
Much of this research has been sparked by collaboration with Amy S. Weinberg. Thanks to her for many discussions on GB theory. Portions of this work have appeared in The Grammatical Basis of Linguistic Performance. The research has been carried out at the MIT Artificial Intelligence Laboratory. Support for the Laboratory's work comes in part from the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505.
|
1984
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
zhang-feng-2021-universal
|
https://aclanthology.org/2021.emnlp-main.581
|
Universal Simultaneous Machine Translation with Mixture-of-Experts Wait-k Policy
|
Simultaneous machine translation (SiMT) generates translation before reading the entire source sentence and hence it has to trade off between translation quality and latency. To fulfill the requirements of different translation quality and latency in practical applications, the previous methods usually need to train multiple SiMT models for different latency levels, resulting in large computational costs. In this paper, we propose a universal SiMT model with Mixture-of-Experts Wait-k Policy to achieve the best translation quality under arbitrary latency with only one trained model. Specifically, our method employs multi-head attention to accomplish the mixture of experts where each head is treated as a wait-k expert with its own waiting words number, and given a test latency and source inputs, the weights of the experts are accordingly adjusted to produce the best translation. Experiments on three datasets show that our method outperforms all the strong baselines under different latency, including the state-of-the-art adaptive policy.
| false |
[] |
[] | null | null | null |
We thank all the anonymous reviewers for their insightful and valuable comments. This work was supported by National Key R&D Program of China (NO. 2017YFE0192900).
|
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
angelov-2009-incremental
|
https://aclanthology.org/E09-1009
|
Incremental Parsing with Parallel Multiple Context-Free Grammars
|
Parallel Multiple Context-Free Grammar (PMCFG) is an extension of context-free grammar for which the recognition problem is still solvable in polynomial time. We describe a new parsing algorithm that has the advantage of being incremental and of supporting PMCFG directly rather than the weaker MCFG formalism. The algorithm is also top-down, which allows it to be used for grammar-based word prediction.
| false |
[] |
[] | null | null | null | null |
2009
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
anechitei-ignat-2013-multilingual
|
https://aclanthology.org/W13-3110
|
Multilingual summarization system based on analyzing the discourse structure at MultiLing 2013
|
This paper describes the architecture of UAIC's summarization system participating at MultiLing 2013. The architecture includes language-independent text processing modules, but also modules that are adapted for one language or another. In our experiments, the languages under consideration are Bulgarian, German, Greek, English, and Romanian. Our method exploits the cohesion and coherence properties of texts to build discourse structures. The output of the parsing process is used to extract general summaries.
| false |
[] |
[] | null | null | null | null |
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
boussidan-ploux-2011-using
|
https://aclanthology.org/W11-0134
|
Using Topic Salience and Connotational Drifts to Detect Candidates to Semantic Change
|
Semantic change has mostly been studied by historical linguists and typically at the scale of centuries. Here we study semantic change at a finer-grained level, the decade, making use of recent newspaper corpora. We detect semantic change candidates by observing context shifts, which can be triggered by topic salience or may be independent of it. To discriminate these phenomena with accuracy, we combine variation filters with a series of indices which enable building a coherent and flexible semantic change detection model. The indices include widely adaptable tools such as frequency counts, co-occurrence patterns and networks, and ranks, as well as model-specific items such as a variability and cohesion measure and graphical representations. The research uses ACOM, a co-occurrence-based geometrical model, which is an extension of the Semantic Atlas. Compared to other models of semantic representation, it allows for extremely detailed analysis and provides insight as to how connotational drift processes unfold.
| false |
[] |
[] | null | null | null |
This research is supported by the Région Rhône-Alpes, via the Cible Project 2009. Many thanks to Sylvain Lupone, previously an engineer at the L2C2, for the tools he developed in the framework of this research.
|
2011
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
cooke-1999-interactive
|
https://aclanthology.org/W99-0805
|
Interactive Auditory Demonstrations
|
The subject matter of speech and hearing is packed full of phenomena and processes which lend themselves to or require auditory demonstration. In the past, this has been achieved through passive media such as tape or CD (e.g., Houtsma et al., 1987; Bregman & Ahad, 1995). The advent of languages such as MATLAB which support sound handling, modern interface elements and powerful signal processing routines, coupled with the availability of fast processors and ubiquitous soundcards, allows for a more interactive style of demonstration. A significant effort is now underway in the speech and hearing community to exploit these favourable conditions (see the MATISSE proceedings (1999), for instance).
| false |
[] |
[] | null | null | null |
Demonstrations described here were programmed by Guy Brown, Martin Cooke and Stuart Wrigley (Sheffield, UK) and Dan Ellis (ICSI, Berkeley, USA). Stuart Cunningham and Ljubomir Josifovski helped with the testing. Funding for some of the development work was provided by the ELSNET LE Training Showcase, 98/02.
|
1999
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
vasconcelos-etal-2020-aspect
|
https://aclanthology.org/2020.lrec-1.183
|
Aspect Flow Representation and Audio Inspired Analysis for Texts
|
For a better understanding of how people write texts, it is fundamental to examine how a particular linguistic aspect (e.g., subjectivity, sentiment, argumentation) is exploited in a text. Analysing such an aspect of a text as a whole (i.e., through a summarised single feature) can lead to significant information loss. In this paper, we propose a novel method of representing and analysing texts that considers how an aspect behaves throughout the text. We represent the texts by aspect flows for capturing all the aspect behaviour. Then, inspired by the resemblance between the format of these flows and a sound waveform, we fragment them into frames and calculate an adaptation of audio analysis features, named here Audio-Like Features, as a way of analysing the texts. The results of the conducted classification tasks reveal that our approach can surpass methods based on summarised features. We also show that a detailed examination of the Audio-Like Features can lead to a more profound knowledge of the represented texts.
| false |
[] |
[] | null | null | null | null |
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
sultan-etal-2020-importance
|
https://aclanthology.org/2020.acl-main.500
|
On the Importance of Diversity in Question Generation for QA
|
Automatic question generation (QG) has shown promise as a source of synthetic training data for question answering (QA). In this paper we ask: Is textual diversity in QG beneficial for downstream QA? Using top-p nucleus sampling to derive samples from a transformer-based question generator, we show that diversity-promoting QG indeed provides better QA training than likelihood maximization approaches such as beam search. We also show that standard QG evaluation metrics such as BLEU, ROUGE and METEOR are inversely correlated with diversity, and propose a diversity-aware intrinsic measure of overall QG quality that correlates well with extrinsic evaluation on QA.
| false |
[] |
[] | null | null | null |
We thank the anonymous reviewers for their valuable feedback.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
li-church-2005-using
|
https://aclanthology.org/H05-1089
|
Using Sketches to Estimate Associations
|
We should not have to look at the entire corpus (e.g., the Web) to know if two words are associated or not. A powerful sampling technique called Sketches was originally introduced to remove duplicate Web pages. We generalize sketches to estimate contingency tables and associations, using a maximum likelihood estimator to find the most likely contingency table given the sample, the margins (document frequencies) and the size of the collection. Not surprisingly, computational work and statistical accuracy (variance or errors) depend on sampling rate, as will be shown both theoretically and empirically. Sampling methods become more and more important with larger and larger collections. At Web scale, sampling rates as low as 10^-4 may suffice.
| false |
[] |
[] | null | null | null | null |
2005
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
bernardi-etal-2006-multilingual
|
http://www.lrec-conf.org/proceedings/lrec2006/pdf/433_pdf.pdf
|
Multilingual Search in Libraries. The case-study of the Free University of Bozen-Bolzano
|
This paper presents an ongoing project aiming at enhancing the OPAC (Online Public Access Catalog) search system of the Library of the Free University of Bozen-Bolzano with multilingual access. The multilingual search system we have developed, MUSIL, integrates advanced linguistic technologies in a user-friendly interface and bridges the gap between the world of free-text search and the world of conceptual librarian search. In this paper we present the architecture of the system, its interface and preliminary evaluations of the precision of the search results.
| false |
[] |
[] | null | null | null | null |
2006
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
ball-1994-practical
|
https://aclanthology.org/1994.tc-1.10
|
Practical Choices for Hardware and Software
|
The choices we have to make when selecting hardware and software appear very difficult for most of us. The unrelenting rate of change and the torrent of information we are bombarded with are so confusing and intimidating that they can make navigating a traffic-jam in the Parisian rush hour look easy. One of the problems is that when it comes to computers we are all surrounded by well-meaning semi-experts -the bloke in the pub, your husband, your children and even minicab drivers. The clue is that these so-called experts are usually more interested in demonstrating their skills and playing with the technology than your need to earn a living -they are what I would call computer freaks. So you are being patronised by the computer experts and your desk is a metre deep in computer magazines. In short, you have more raw data than any one person can assimilate in a lifetime. How do you make sense of it all? How do you take sensible decisions? What you need to do is start from some basic principles which follow from answering two simple questions -"Why do you need a computer?" and "Are you going to waste your money?":
| false |
[] |
[] | null | null | null | null |
1994
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
dang-etal-1998-investigating-regular
|
https://aclanthology.org/P98-1046
|
Investigating Regular Sense Extensions based on Intersective Levin Classes
|
In this paper we specifically address questions of polysemy with respect to verbs, and how regular extensions of meaning can be achieved through the adjunction of particular syntactic phrases. We see verb classes as the key to making generalizations about regular extensions of meaning. Current approaches to English classification, Levin classes and WordNet, have limitations in their applicability that impede their utility as general classification schemes. We present a refinement of Levin classes, intersective sets, which are a more fine-grained classification and have more coherent sets of syntactic frames and associated semantic components. We have preliminary indications that the membership of our intersective sets will be more compatible with WordNet than the original Levin classes. We also have begun to examine related classes in Portuguese, and find that these verbs demonstrate similarly coherent syntactic and semantic properties.
| false |
[] |
[] | null | null | null | null |
1998
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
bonheme-grzes-2020-sesam
|
https://aclanthology.org/2020.semeval-1.102
|
SESAM at SemEval-2020 Task 8: Investigating the Relationship between Image and Text in Sentiment Analysis of Memes
|
This paper presents our submission to task 8 (memotion analysis) of the SemEval 2020 competition. We explain the algorithms that were used to learn our models along with the process of tuning the algorithms and selecting the best model. Since meme analysis is a challenging task with two distinct modalities, we studied the impact of different multimodal representation strategies. The results of several approaches to dealing with multimodal data are therefore discussed in the paper. We found that alignment-based strategies did not perform well on memes. Our quantitative results also showed that images and text were uncorrelated. Fusion-based strategies did not show significant improvements, and using one modality only (text or image) tends to lead to better results when applied with the predictive models that we used in our research.
| false |
[] |
[] | null | null | null |
We thank the SemEval-2020 organisers for their time to prepare the data and run the competition, and the reviewers for their insightful comments.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
kanzaki-isahara-2018-building
|
https://aclanthology.org/L18-1376
|
Building a List of Synonymous Words and Phrases of Japanese Compound Verbs
|
We started to construct a database of synonymous expressions of Japanese "Verb + Verb" compounds semi-automatically. Japanese is known to be rich in compound verbs consisting of two verbs joined together. However, we did not have a comprehensive Japanese compound lexicon. Recently, a Japanese compound verb lexicon was constructed by the National Institute for Japanese Language and Linguistics (NINJAL) (2013-15). Though it has meanings, example sentences, syntactic patterns and actual sentences from the corpus, it has no information on relationships with other words, such as synonymous words and phrases. We automatically extracted synonymous expressions of compound verbs from a corpus, "five hundred million Japanese texts gathered from the web," produced by Kawahara et al. (2006), by using word2vec and cosine similarity, and found suitable clusters which correspond to meanings of the compound verbs by using k-means++ and PCA. The automatic extraction from the corpus helps humans find not only typical synonyms but also unexpected synonymous words and phrases. Then we manually compile the list of synonymous expressions of Japanese compound verbs by assessing the results and link it to the "Compound Verb Lexicon" published by NINJAL.
| false |
[] |
[] | null | null | null |
This work was supported by JSPS KAKENHI (Grant-in-Aid for Scientific Research (C)) Grant Number JP16K02727.
|
2018
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
lin-etal-2021-contextualized
|
https://aclanthology.org/2021.emnlp-main.77
|
Contextualized Query Embeddings for Conversational Search
|
This paper describes a compact and effective model for low-latency passage retrieval in conversational search based on learned dense representations. Prior to our work, the state-of-the-art approach uses a multi-stage pipeline comprising conversational query reformulation and information retrieval modules. Despite its effectiveness, such a pipeline often includes multiple neural models that require long inference times. In addition, independently optimizing each module ignores dependencies among them. To address these shortcomings, we propose to integrate conversational query reformulation directly into a dense retrieval model. To aid in this goal, we create a dataset with pseudo-relevance labels for conversational search to overcome the lack of training data and to explore different training strategies. We demonstrate that our model effectively rewrites conversational queries as dense representations in conversational search and open-domain question answering datasets. Finally, after observing that our model learns to adjust the L2 norm of query token embeddings, we leverage this property for hybrid retrieval and to support error analysis.
| false |
[] |
[] | null | null | null |
This research was supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada. Additionally, we would like to thank the support of Cloud TPUs from Google's TPU Research Cloud (TRC).
|
2021
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
yamauchi-etal-2013-robotic
|
https://aclanthology.org/W13-4060
|
A Robotic Agent in a Virtual Environment that Performs Situated Incremental Understanding of Navigational Utterances
|
We demonstrate a robotic agent in a 3D virtual environment that understands human navigational instructions. Such an agent needs to select actions based on not only instructions but also situations. It is also expected to immediately react to the instructions. Our agent incrementally understands spoken instructions and immediately controls a mobile robot based on the incremental understanding results and situation information such as the locations of obstacles and moving history. It can be used as an experimental system for collecting human-robot interactions in dynamically changing situations.
| false |
[] |
[] | null | null | null |
We thank Antoine Raux and Shun Sato for their contribution to building the previous versions of this system. Thanks also go to Timo Baumann, Okko Buß, and David Schlangen for making their InproTK available.
|
2013
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
zhang-etal-2010-machine
|
https://aclanthology.org/C10-2165
|
Machine Transliteration: Leveraging on Third Languages
|
This paper presents two pivot strategies for statistical machine transliteration, namely a system-based pivot strategy and a model-based pivot strategy. Given two independent source-pivot and pivot-target name pair corpora, the model-based strategy learns a direct source-target transliteration model while the system-based strategy learns a source-pivot model and a pivot-target model, respectively. Experimental results on benchmark data show that the system-based pivot strategy is effective in reducing the high resource requirement of training corpus for low-density language pairs while the model-based pivot strategy performs worse than the system-based one.
| false |
[] |
[] | null | null | null | null |
2010
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
barthelemy-1998-morphological
|
https://aclanthology.org/W98-1010
|
A Morphological Analyzer for Akkadian Verbal Forms with a Model of Phonetic Transformations
|
The paper describes a first attempt to design a morphological analyzer for Akkadian verbal forms. Akkadian is a dead Semitic language which was used in ancient Mesopotamia. The analyzer described has two levels: the first one is a deterministic and unique paradigm that describes the flexion of Akkadian verbs. The second level is a non-deterministic rewriting system which describes possible phonetic transformations of the forms. The results obtained so far are encouraging.
| false |
[] |
[] | null | null | null |
The following references, given by one of the referees as relevant to our work, were not used for lack of time.
|
1998
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |
park-caragea-2020-scientific
|
https://aclanthology.org/2020.coling-main.472
|
Scientific Keyphrase Identification and Classification by Pre-Trained Language Models Intermediate Task Transfer Learning
|
Scientific keyphrase identification and classification is the task of detecting keyphrases in scholarly text and classifying them into a set of predefined classes. This task has a wide range of benefits, but its performance is still limited by the lack of the large amounts of labeled data required for training deep neural models. To overcome this challenge, we explore the pre-trained language models BERT and SciBERT with intermediate task transfer learning, using 42 data-rich related intermediate-target task combinations. We reveal that intermediate task transfer learning on SciBERT induces a better starting point for target task fine-tuning compared with BERT and achieves competitive performance in scientific keyphrase identification and classification compared to both previous works and strong baselines. Interestingly, we observe that BERT with intermediate task transfer learning fails to improve the performance of scientific keyphrase identification and classification, potentially due to significant catastrophic forgetting. This result highlights that scientific knowledge acquired during the pre-training of language models on large scientific collections plays an important role in the target tasks. We also observe that sequence tagging-related intermediate tasks, especially syntactic structure learning tasks such as POS tagging, tend to work best for scientific keyphrase identification and classification.
| false |
[] |
[] | null | null | null |
We thank Isabelle Augenstein for several clarifications of the task and the evaluation approach. We also thank our anonymous reviewers for their constructive comments and feedback, which helped improve our paper. This research is supported in part by NSF CAREER award #1802358, NSF CRI award #1823292, and UIC Discovery Partners Institute to Cornelia Caragea. Any opinions, findings, and conclusions expressed here are those of the authors and do not necessarily reflect the views of NSF.
|
2020
| false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false |