Dataset columns:
  context         stringlengths   1.67k to 46.5k
  question        stringlengths   87 to 854
  answer_prefix   stringclasses   5 values
  answers         listlengths     1 to 3
  max_new_tokens  int64           52 to 532
  length          int64           360 to 8.19k
  task            stringclasses   8 values
  _id             stringlengths   32 to 32
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: 10pt 1.10pt [ Characterizing Political Fake News in Twitter by its Meta-DataJulio Amador Díaz LópezAxel Oehmichen Miguel Molina-Solana( j.amador, axelfrancois.oehmichen11, [email protected] ) Imperial College London This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users. ] Introduction While fake news, understood as deliberately misleading pieces of information, have existed since long ago (e.g. it is not unusual to receive news falsely claiming the death of a celebrity), the term reached the mainstream, particularly so in politics, during the 2016 presidential election in the United States BIBREF0 . Since then, governments and corporations alike (e.g. Google BIBREF1 and Facebook BIBREF2 ) have begun efforts to tackle fake news as they can affect political decisions BIBREF3 . Yet, the ability to define, identify and stop fake news from spreading is limited. Since the Obama campaign in 2008, social media has been pervasive in the political arena in the United States. Studies report that up to 62% of American adults receive their news from social media BIBREF4 . The wide use of platforms such as Twitter and Facebook has facilitated the diffusion of fake news by simplifying the process of receiving content with no significant third party filtering, fact-checking or editorial judgement. Such characteristics make these platforms suitable means for sharing news that, disguised as legit ones, try to confuse readers. Such use and their prominent rise has been confirmed by Craig Silverman, a Canadian journalist who is a prominent figure on fake news BIBREF5 : “In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlet”. Our current research hence departs from the assumption that social media is a conduit for fake news and asks the question of whether fake news (as spam was some years ago) can be identified, modelled and eventually blocked. In order to do so, we use a sample of more that 1.5M tweets collected on November 8th 2016 —election day in the United States— with the goal of identifying features that tweets containing fake news are likely to have. As such, our paper aims to provide a preliminary characterization of fake news in Twitter by looking into meta-data embedded in tweets. Considering meta-data as a relevant factor of analysis is in line with findings reported by Morris et al. BIBREF6 . 
We argue that understanding differences between tweets containing fake news and regular tweets will allow researchers to design mechanisms to block fake news in Twitter. Specifically, our goals are: 1) compare the characteristics of tweets labelled as containing fake news to tweets labelled as not containing them, 2) characterize, through their meta-data, viral tweets containing fake news and the accounts from which they originated, and 3) determine the extent to which tweets containing fake news expressed polarized political views. For our study, we used the number of retweets to single-out those that went viral within our sample. Tweets within that subset (viral tweets hereafter) are varied and relate to different topics. We consider that a tweet contains fake news if its text falls within any of the following categories described by Rubin et al. BIBREF7 (see next section for the details of such categories): serious fabrication, large-scale hoaxes, jokes taken at face value, slanted reporting of real facts and stories where the truth is contentious. The dataset BIBREF8 , manually labelled by an expert, has been publicly released and is available to researchers and interested parties. From our results, the following main observations can be made: Our findings resonate with similar work done on fake news such as the one from Allcot and Gentzkow BIBREF9 . Therefore, even if our study is a preliminary attempt at characterizing fake news on Twitter using only their meta-data, our results provide external validity to previous research. Moreover, our work not only stresses the importance of using meta-data, but also underscores which parameters may be useful to identify fake news on Twitter. The rest of the paper is organized as follows. The next section briefly discusses where this work is located within the literature on fake news and contextualizes the type of fake news we are studying. Then, we present our hypotheses, the data, and the methodology we follow. Finally, we present our findings, conclusions of this study, and future lines of work. Defining Fake news Our research is connected to different strands of academic knowledge related to the phenomenon of fake news. In relation to Computer Science, a recent survey by Conroy and colleagues BIBREF10 identifies two popular approaches to single-out fake news. On the one hand, the authors pointed to linguistic approaches consisting in using text, its linguistic characteristics and machine learning techniques to automatically flag fake news. On the other, these researchers underscored the use of network approaches, which make use of network characteristics and meta-data, to identify fake news. With respect to social sciences, efforts from psychology, political science and sociology, have been dedicated to understand why people consume and/or believe misinformation BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Most of these studies consistently reported that psychological biases such as priming effects and confirmation bias play an important role in people ability to discern misinformation. In relation to the production and distribution of fake news, a recent paper in the field of Economics BIBREF9 found that most fake news sites use names that resemble those of legitimate organizations, and that sites supplying fake news tend to be short-lived. These authors also noticed that fake news items are more likely shared than legitimate articles coming from trusted sources, and they tend to exhibit a larger level of polarization. 
The conceptual issue of how to define fake news is a serious and unresolved issue. As the focus of our work is not attempting to offer light on this, we will rely on work by other authors to describe what we consider as fake news. In particular, we use the categorization provided by Rubin et al. BIBREF7 . The five categories they described, together with illustrative examples from our dataset, are as follows: Research Hypotheses Previous works on the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. Our hypotheses builds on them and identifies three important dimensions that may help distinguishing fake news from legit information: Taking those three dimensions into account, we propose the following hypotheses about the features that we believe can help to identify tweets containing fake news from those not containing them. They will be later tested over our collected dataset. Exposure. Characterization. Polarization. Data and Methodology For this study, we collected publicly available tweets using Twitter's public API. Given the nature of the data, it is important to emphasize that such tweets are subject to Twitter's terms and conditions which indicate that users consent to the collection, transfer, manipulation, storage, and disclosure of data. Therefore, we do not expect ethical, legal, or social implications from the usage of the tweets. Our data was collected using search terms related to the presidential election held in the United States on November 8th 2016. Particularly, we queried Twitter's streaming API, more precisely the filter endpoint of the streaming API, using the following hashtags and user handles: #MyVote2016, #ElectionDay, #electionnight, @realDonaldTrump and @HillaryClinton. The data collection ran for just one day (Nov 8th 2016). One straightforward way of sharing information on Twitter is by using the retweet functionality, which enables a user to share a exact copy of a tweet with his followers. Among the reasons for retweeting, Body et al. BIBREF15 reported the will to: 1) spread tweets to a new audience, 2) to show one’s role as a listener, and 3) to agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they are the most harmful ones, as they reach a wider audience), and understand how it differs from other viral tweets (that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times. Once we have the dataset of viral tweets, we eliminated duplicates (some of the tweets were collected several times because they had several handles) and an expert manually inspected the text field within the tweets to label them as containing fake news, or not containing them (according to the characterization presented before). This annotated dataset BIBREF8 is publicly available and can be freely reused. Finally, we use the following fields within tweets (from the ones returned by Twitter's API) to compare their distributions and look for differences between viral tweets containing fake news and viral tweets not containing fake news: In the following section, we provide graphical descriptions of the distribution of each of the identified attributes for the two sets of tweets (those labelled as containing fake news and those labelled as not containing them). 
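As a concrete illustration of the selection step just described, the sketch below filters the collected tweets down to the viral subset (more than 1000 retweets) and removes duplicates with pandas. The file name and column names (`tweets.jsonl`, `id`, `retweet_count`) are placeholders chosen for illustration, not artefacts released with the paper.

```python
import json
import pandas as pd

VIRAL_THRESHOLD = 1000  # retweet count used in the paper to call a tweet "viral"

# Load the raw collection; each line is assumed to hold one tweet object
# returned by the streaming API (hypothetical file name).
with open("tweets.jsonl", "r", encoding="utf-8") as fh:
    records = [json.loads(line) for line in fh]

df = pd.DataFrame.from_records(records)

# Keep one row per tweet: the same tweet may have been captured several times
# because it matched more than one tracked hashtag or handle.
df = df.drop_duplicates(subset="id")

# Single out the viral subset studied in the paper.
viral = df[df["retweet_count"] > VIRAL_THRESHOLD].copy()

# Placeholder column for the manual fake-news annotation described in the text.
viral["contains_fake_news"] = pd.NA

print(f"{len(viral)} viral tweets out of {len(df)} collected")
```

The resulting frame is then the unit of analysis for the meta-data comparisons that follow.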
Where appropriate, we normalized and/or took logarithms of the data for better representation. To gain a better understanding of the significance of those differences, we use the Kolmogorov-Smirnov test with the null hypothesis that both distributions are equal. Results The sample collected consisted of 1 785 855 tweets published by 848 196 different users. Within our sample, we identified 1327 tweets that went viral (retweeted more than 1000 times by the 8th of November 2016) produced by 643 users. This small subset of viral tweets was retweeted on 290 841 occasions in the observed time-window. The 1327 `viral' tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `not containing fake news'. Note that the categorization is far from perfect given the ambiguity of fake news themselves and the human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth. The following results detail characteristics of these tweets along the previously mentioned dimensions. Table TABREF23 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered. Exposure Figure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently. As such, Twitter users were exposed to fake news related to the election for a shorter period of time. However, in terms of retweets, Figure FIGREF25 shows no apparent difference between tweets containing fake news and those not containing them. That is confirmed by the Kolmogorov-Smirnov test, which does not reject the hypothesis that the associated distributions are equal. In relation to the number of favourites, users that generated at least one viral tweet containing fake news appear to have, on average, fewer favourites than users that do not generate them. Figure FIGREF26 shows the distribution of favourites. Despite the apparent visual differences, the differences are not statistically significant. Finally, the number of hashtags used in viral fake news appears to be larger than in other viral tweets. Figure FIGREF27 shows the density distribution of the number of hashtags used. However, once again, we were not able to find any statistical difference between the average number of hashtags in a viral tweet and the average number of hashtags in viral fake news. Characterization We found that 82 users within our sample were spreading fake news (i.e. they produced at least one tweet which was labelled as fake news). Out of those, 34 had verified accounts, and the rest were unverified. From the 48 unverified accounts, 6 have been suspended by Twitter at the date of writing, 3 tried to imitate legitimate accounts of others, and 4 accounts have already been deleted. Figure FIGREF28 shows the proportion of verified accounts to unverified accounts for viral tweets (containing fake news vs. not containing fake news). From the chart, it is clear that there is a higher chance of fake news coming from unverified accounts. 
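The significance testing described above can be reproduced with SciPy's two-sample Kolmogorov-Smirnov test. The sketch below uses synthetic follower counts purely as stand-ins for the real meta-data fields; only the test itself is taken from the text.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Placeholder samples standing in for one meta-data field (e.g. follower counts)
# of viral tweets with and without fake news; real values would come from the dataset.
followers_fake = rng.lognormal(mean=8.0, sigma=2.0, size=136)
followers_other = rng.lognormal(mean=10.0, sigma=2.0, size=1191)

# Two-sample KS test with the null hypothesis that both samples
# come from the same distribution; log-transform for better representation.
stat, p_value = ks_2samp(np.log1p(followers_fake), np.log1p(followers_other))
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.2e}")
```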
Turning to friends, accounts distributing fake news appear to have, on average, the same number of friends as those distributing tweets with no fake news. However, the density distribution of friends from the accounts (Figure FIGREF29 ) shows that there is indeed a statistically significant difference in their distributions. If we take into consideration the number of followers, accounts generating viral tweets with fake news do have a very different distribution on this dimension, compared to those accounts generating viral tweets with no fake news (see Figure FIGREF30 ). In fact, such differences are statistically significant. A useful representation for friends and followers is the friends/followers ratio. Figures FIGREF31 and FIGREF32 show this representation. Notice that accounts spreading viral tweets with fake news have, on average, a larger friends/followers ratio. The ratio for accounts not generating fake news is more evenly distributed. With respect to the number of mentions, Figure FIGREF33 shows that viral tweets labelled as containing fake news appear to use mentions to other users less frequently than viral tweets not containing fake news. In other words, tweets containing fake news mostly contain one mention, whereas other tweets tend to have two. Such differences are statistically significant. The analysis (Figure FIGREF34 ) of the presence of media in the tweets in our dataset shows that tweets labelled as not containing fake news appear to present more media elements than those labelled as fake news. However, the difference is not statistically significant. On the other hand, Figure FIGREF35 shows that viral tweets containing fake news appear to include more URLs to other sites than viral tweets that do not contain fake news. In fact, the difference between the two distributions is statistically significant (assuming INLINEFORM0 ). Polarization Finally, manual inspection of the text field of those viral tweets labelled as containing fake news shows that 117 of such tweets expressed support for Donald Trump, while only 8 supported Hillary Clinton. The remaining tweets contained fake news related to other topics, not expressing support for either of the candidates. Discussion As a summary, and constrained by our existing dataset, we made the following observations regarding differences between viral tweets labelled as containing fake news and viral tweets labelled as not containing them: These findings (related to our initial hypothesis in Table TABREF44 ) clearly suggest that there are specific pieces of meta-data about tweets that may allow the identification of fake news. One such parameter is the time of exposure. Viral tweets containing fake news are shorter-lived than those containing other types of content. This notion seems to resonate with our finding that a number of accounts spreading fake news have already been deleted or suspended by Twitter at the time of writing. If one considers that researchers using different data have found similar results BIBREF9 , it appears that the lifetime of accounts, together with the age of the questioned viral content, could be useful to identify fake news. In light of this finding, newly created accounts should probably be put under higher scrutiny than older ones. This, in fact, would be a useful a priori bias for a Bayesian classifier. Accounts spreading fake news appear to have a larger friends/followers ratio (i.e. 
they have, on average, the same number of friends but a smaller number of followers) than those spreading viral content only. Together with the fact that, on average, tweets containing fake news have more URLs than those spreading viral content, it is possible to hypothesize that, both, the ratio of friends/followers of the account producing a viral tweet and number of URLs contained in such a tweet could be useful to single-out fake news in Twitter. Not only that, but our finding related to the number of URLs is in line with intuitions behind the incentives to create fake news commonly found in the literature BIBREF9 (in particular that of obtaining revenue through click-through advertising). Finally, it is interesting to notice that the content of viral fake news was highly polarized. This finding is also in line with those of Alcott et al. BIBREF9 . This feature suggests that textual sentiment analysis of the content of tweets (as most researchers do), together with the above mentioned parameters from meta-data, may prove useful for identifying fake news. Conclusions With the election of Donald Trump as President of the United States, the concept of fake news has become a broadly-known phenomenon that is getting tremendous attention from governments and media companies. We have presented a preliminary study on the meta-data of a publicly available dataset of tweets that became viral during the day of the 2016 US presidential election. Our aim is to advance the understanding of which features might be characteristic of viral tweets containing fake news in comparison with viral tweets without fake news. We believe that the only way to automatically identify those deceitful tweets (i.e. containing fake news) is by actually understanding and modelling them. Only then, the automation of the processes of tagging and blocking these tweets can be successfully performed. In the same way that spam was fought, we anticipate fake news will suffer a similar evolution, with social platforms implementing tools to deal with them. With most works so far focusing on the actual content of the tweets, ours is a novel attempt from a different, but also complementary, angle. Within the used dataset, we found there are differences around exposure, characteristics of accounts spreading fake news and the tone of the content. Those findings suggest that it is indeed possible to model and automatically detect fake news. We plan to replicate and validate our experiments in an extended sample of tweets (until 4 months after the US election), and tests the predictive power of the features we found relevant within our sample. Author Disclosure Statement No competing financial interest exist. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What is their definition of tweets going viral? Only give me the answer and do not output any other words.
Answer:
[ "Viral tweets are the ones that are retweeted more than 1000 times", "those that contain a high number of retweets" ]
276
3,837
single_doc_qa
d135191a848646b583fcfc84091f5554
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction There have been many implementations of the word2vec model in either of the two architectures it provides: continuous skipgram and CBoW (BIBREF0). Similar distributed models of word or subword embeddings (or vector representations) find usage in sota, deep neural networks like BERT and its successors (BIBREF1, BIBREF2, BIBREF3). These deep networks generate contextual representations of words after been trained for extended periods on large corpora, unsupervised, using the attention mechanisms (BIBREF4). It has been observed that various hyper-parameter combinations have been used in different research involving word2vec with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There are astronomically high numbers of combinations of hyper-parameters possible for neural networks, even with just a few layers. Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical with NER datasets. The objective of this work is to determine the optimal combinations of word2vec hyper-parameters for intrinsic evaluation (semantic and syntactic analogies) and extrinsic evaluation tasks (BIBREF13, BIBREF14), like SA and NER. It is not our objective in this work to record sota results. Some of the main contributions of this research are the empirical establishment of optimal combinations of word2vec hyper-parameters for NLP tasks, discovering the behaviour of quality of vectors viz-a-viz increasing dimensions and the confirmation of embeddings being task-specific for the downstream. The rest of this paper is organised as follows: the literature review that briefly surveys distributed representation of words, particularly word2vec; the methodology employed in this research work; the results obtained and the conclusion. Literature Review Breaking away from the non-distributed (high-dimensional, sparse) representations of words, typical of traditional bag-of-words or one-hot-encoding (BIBREF15), BIBREF0 created word2vec. Word2Vec consists of two shallow neural network architectures: continuous skipgram and CBoW. It uses distributed (low-dimensional, dense) representations of words that group similar words. 
This new model traded the complexity of deep neural network architectures, by other researchers, for more efficient training over large corpora. Its architectures have two training algorithms: negative sampling and hierarchical softmax (BIBREF16). The released model was trained on Google news dataset of 100 billion words. Implementations of the model have been undertaken by researchers in the programming languages Python and C++, though the original was done in C (BIBREF17). Continuous skipgram predicts (by maximizing classification of) words before and after the center word, for a given range. Since distant words are less connected to a center word in a sentence, less weight is assigned to such distant words in training. CBoW, on the other hand, uses words from the history and future in a sequence, with the objective of correctly classifying the target word in the middle. It works by projecting all history or future words within a chosen window into the same position, averaging their vectors. Hence, the order of words in the history or future does not influence the averaged vector. This is similar to the traditional bag-of-words, which is oblivious of the order of words in its sequence. A log-linear classifier is used in both architectures (BIBREF0). In further work, they extended the model to be able to do phrase representations and subsample frequent words (BIBREF16). Being a NNLM, word2vec assigns probabilities to words in a sequence, like other NNLMs such as feedforward networks or recurrent neural networks (BIBREF15). Earlier models like latent dirichlet allocation (LDA) and latent semantic analysis (LSA) exist and effectively achieve low dimensional vectors by matrix factorization (BIBREF18, BIBREF19). It's been shown that word vectors are beneficial for NLP tasks (BIBREF15), such as sentiment analysis and named entity recognition. Besides, BIBREF0 showed with vector space algebra that relationships among words can be evaluated, expressing the quality of vectors produced from the model. The famous, semantic example: vector("King") - vector("Man") + vector("Woman") $\approx $ vector("Queen") can be verified using cosine distance. Another type of semantic meaning is the relationship between a capital city and its corresponding country. Syntactic relationship examples include plural verbs and past tense, among others. Combination of both syntactic and semantic analyses is possible and provided (totaling over 19,000 questions) as Google analogy test set by BIBREF0. WordSimilarity-353 test set is another analysis tool for word vectors (BIBREF20). Unlike Google analogy score, which is based on vector space algebra, WordSimilarity is based on human expert-assigned semantic similarity on two sets of English word pairs. Both tools rank from 0 (totally dissimilar) to 1 (very much similar or exact, in Google analogy case). A typical artificial neural network (ANN) has very many hyper-parameters which may be tuned. Hyper-parameters are values which may be manually adjusted and include vector dimension size, type of algorithm and learning rate (BIBREF19). BIBREF0 tried various hyper-parameters with both architectures of their model, ranging from 50 to 1,000 dimensions, 30,000 to 3,000,000 vocabulary sizes, 1 to 3 epochs, among others. In our work, we extended research to 3,000 dimensions. Different observations were noted from the many trials. They observed diminishing returns after a certain point, despite additional dimensions or larger, unstructured training data. 
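The vector-algebra analogy mentioned above (vector("King") - vector("Man") + vector("Woman") ≈ vector("Queen")) can be checked with gensim's KeyedVectors interface. The pretrained GloVe vectors loaded below are only a convenient stand-in for any trained model; this is a generic sketch, not code from the paper.

```python
import gensim.downloader as api

# Load a small set of pretrained vectors for illustration (any KeyedVectors works).
wv = api.load("glove-wiki-gigaword-100")

# vector("king") - vector("man") + vector("woman") ~= vector("queen"),
# ranked by cosine similarity over the whole vocabulary.
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Pairwise cosine similarity, the quantity WordSimilarity-353 scores are compared against.
print(wv.similarity("tiger", "cat"))
```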
However, quality increased when both dimensions and data size were increased together. Although BIBREF16 pointed out that choice of optimal hyper-parameter configurations depends on the NLP problem at hand, they identified the most important factors are architecture, dimension size, subsampling rate, and the window size. In addition, it has been observed that variables like size of datasets improve the quality of word vectors and, potentially, performance on downstream tasks (BIBREF21, BIBREF0). Methodology The models were generated in a shared cluster running Ubuntu 16 with 32 CPUs of 32x Intel Xeon 4110 at 2.1GHz. Gensim (BIBREF17) python library implementation of word2vec was used with parallelization to utilize all 32 CPUs. The downstream experiments were run on a Tesla GPU on a shared DGX cluster running Ubuntu 18. Pytorch deep learning framework was used. Gensim was chosen because of its relative stability, popular support and to minimize the time required in writing and testing a new implementation in python from scratch. To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words removed using the natural language toolkit (NLTK) (BIBREF22) and data pre-processing carried out. Table TABREF2 describes most hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset with 80 serialized models totaling 15.136GB while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus and additional runs for other dimensions for the window 8 + skipgram + heirarchical softmax combination to verify the trend of quality of word vectors as dimensions are increased. Google (semantic and syntactic) analogy tests and WordSimilarity-353 (with Spearman correlation) by BIBREF20 were chosen for intrinsic evaluations. They measure the quality of word vectors. The analogy scores are averages of both semantic and syntactic tests. NER and SA were chosen for extrinsic evaluations. The GMB dataset for NER was trained in an LSTM network, which had an embedding layer for input. The network diagram is shown in fig. FIGREF4. The IMDb dataset for SA was trained in a BiLSTM network, which also used an embedding layer for input. Its network diagram is given in fig. FIGREF4. It includes an additional hidden linear layer. Hyper-parameter details of the two networks for the downstream tasks are given in table TABREF3. The metrics for extrinsic evaluation include F1, precision, recall and accuracy scores. In both tasks, the default pytorch embedding was tested before being replaced by pre-trained embeddings released by BIBREF0 and ours. In each case, the dataset was shuffled before training and split in the ratio 70:15:15 for training, validation (dev) and test sets. Batch size of 64 was used. For each task, experiments for each embedding was conducted four times and an average value calculated and reported in the next section Results and Discussion Table TABREF5 summarizes key results from the intrinsic evaluations for 300 dimensions. Table TABREF6 reveals the training time (in hours) and average embedding loading time (in seconds) representative of the various models used. Tables TABREF11 and TABREF12 summarize key results for the extrinsic evaluations. 
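A minimal sketch of how one combination from the hyper-parameter grid above could be trained and evaluated with gensim is shown below. It assumes gensim 4.x parameter names (`vector_size`, `epochs`); the corpus and analogy-file paths are placeholders rather than the authors' actual files.

```python
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# Placeholder path to a pre-processed corpus with one sentence per line.
sentences = LineSentence("simple_wiki_clean.txt")

# One combination from the grid: window 8, skipgram (sg=1), hierarchical softmax (hs=1).
model = Word2Vec(
    sentences,
    vector_size=300,   # embedding dimension
    window=8,          # context window size
    sg=1,              # 1 = skipgram, 0 = CBoW
    hs=1,              # 1 = hierarchical softmax, 0 = negative sampling
    negative=0,        # disable negative sampling when hs=1
    min_count=5,       # drop words occurring fewer than 5 times
    epochs=10,
    workers=32,
)

# Intrinsic evaluation against the Google analogy test set (path is a placeholder).
score, _ = model.wv.evaluate_word_analogies("questions-words.txt")
print(f"analogy accuracy: {score:.3f}")

model.save("w8s1h1_300d.model")
```

Looping this over the window, algorithm, and dimension settings reproduces the kind of grid the paper describes, with one serialized model per combination.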
Figures FIGREF7, FIGREF9, FIGREF10, FIGREF13 and FIGREF14 present line graph of the eight combinations for different dimension sizes for Simple Wiki, trend of Simple Wiki and Billion Word corpora over several dimension sizes, analogy score comparison for models across datasets, NER mean F1 scores on the GMB dataset and SA mean F1 scores on the IMDb dataset, respectively. Combination of the skipgram using hierarchical softmax and window size of 8 for 300 dimensions outperformed others in analogy scores for the Wiki Abstract. However, its results are so poor, because of the tiny file size, they're not worth reporting here. Hence, we'll focus on results from the Simple Wiki and Billion Word corpora. Best combination changes when corpus size increases, as will be noticed from table TABREF5. In terms of analogy score, for 10 epochs, w8s0h0 performs best while w8s1h0 performs best in terms of WordSim and corresponding Spearman correlation. Meanwhile, increasing the corpus size to BW, w4s1h0 performs best in terms of analogy score while w8s1h0 maintains its position as the best in terms of WordSim and Spearman correlation. Besides considering quality metrics, it can be observed from table TABREF6 that comparative ratio of values between the models is not commensurate with the results in intrinsic or extrinsic values, especially when we consider the amount of time and energy spent, since more training time results in more energy consumption (BIBREF23). Information on the length of training time for the released Mikolov model is not readily available. However, it's interesting to note that their presumed best model, which was released is also s1h0. Its analogy score, which we tested and report, is confirmed in their paper. It beats our best models in only analogy score (even for Simple Wiki), performing worse in others. This is inspite of using a much bigger corpus of 3,000,000 vocabulary size and 100 billion words while Simple Wiki had vocabulary size of 367,811 and is 711MB. It is very likely our analogy scores will improve when we use a much larger corpus, as can be observed from table TABREF5, which involves just one billion words. Although the two best combinations in analogy (w8s0h0 & w4s0h0) for SW, as shown in fig. FIGREF7, decreased only slightly compared to others with increasing dimensions, the increased training time and much larger serialized model size render any possible minimal score advantage over higher dimensions undesirable. As can be observed in fig. FIGREF9, from 100 dimensions, scores improve but start to drop after over 300 dimensions for SW and after over 400 dimensions for BW. More becomes worse! This trend is true for all combinations for all tests. Polynomial interpolation may be used to determine the optimal dimension in both corpora. Our models are available for confirmation and source codes are available on github. With regards to NER, most pretrained embeddings outperformed the default pytorch embedding, with our BW w4s1h0 model (which is best in BW analogy score) performing best in F1 score and closely followed by BIBREF0 model. On the other hand, with regards to SA, pytorch embedding outperformed the pretrained embeddings but was closely followed by our SW w8s0h0 model (which also had the best SW analogy score). BIBREF0 performed second worst of all, despite originating from a very huge corpus. The combinations w8s0h0 & w4s0h0 of SW performed reasonably well in both extrinsic tasks, just as the default pytorch embedding did. 
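For the downstream comparison between the default PyTorch embedding and pre-trained vectors, a common pattern is to copy the trained word2vec matrix into `torch.nn.Embedding` before the (Bi)LSTM. The sketch below is a generic illustration of that pattern; the vocabulary handling, model path, and example sentence are assumptions, not code from the paper.

```python
import torch
import torch.nn as nn
from gensim.models import Word2Vec

# Load a previously trained model (placeholder path) and take its vectors.
wv = Word2Vec.load("w8s1h1_300d.model").wv

# Build the embedding matrix, reserving index 0 for padding/unknown tokens.
weights = torch.FloatTensor(wv.vectors)
pad_row = torch.zeros(1, weights.size(1))
embedding = nn.Embedding.from_pretrained(
    torch.cat([pad_row, weights]), freeze=False, padding_idx=0
)
word2idx = {word: i + 1 for i, word in enumerate(wv.index_to_key)}

# The embedding layer then feeds a BiLSTM, as in the SA network described above.
lstm = nn.LSTM(input_size=weights.size(1), hidden_size=128,
               batch_first=True, bidirectional=True)
ids = torch.tensor([[word2idx.get(w, 0) for w in ["the", "movie", "was", "great"]]])
out, _ = lstm(embedding(ids))
print(out.shape)  # (1, 4, 256)
```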
Conclusion This work analyses, empirically, optimal combinations of hyper-parameters for embeddings, specifically for word2vec. It further shows that for downstream tasks, like NER and SA, there's no silver bullet! However, some combinations show strong performance across tasks. Performance of embeddings is task-specific and high analogy scores do not necessarily correlate positively with performance on downstream tasks. This point on correlation is somewhat similar to results by BIBREF24 and BIBREF14. It was discovered that increasing dimension size depreciates performance after a point. If strong considerations of saving time, energy and the environment are made, then reasonably smaller corpora may suffice or even be better in some cases. The on-going drive by many researchers to use ever-growing data to train deep neural networks can benefit from the findings of this work. Indeed, hyper-parameter choices are very important in neural network systems (BIBREF19). Future work that may be investigated are performance of other architectures of word or sub-word embeddings, the performance and comparison of embeddings applied to languages other than English and how embeddings perform in other downstream tasks. In addition, since the actual reason for the changes in best model as corpus size increases is not clear, this will also be suitable for further research. The work on this project is partially funded by Vinnova under the project number 2019-02996 "Språkmodeller för svenska myndigheter" Acronyms Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What sentiment analysis dataset is used? Only give me the answer and do not output any other words.
Answer:
[ "IMDb dataset of movie reviews", "IMDb" ]
276
3,210
single_doc_qa
4f8414fbe992402e8acbc5b06a00986b
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Data imbalance is a common issue in a variety of NLP tasks such as tagging and machine reading comprehension. Table TABREF3 gives concrete examples: for the Named Entity Recognition (NER) task BIBREF2, BIBREF3, most tokens are backgrounds with tagging class $O$. Specifically, the number of tokens tagging class $O$ is 5 times as many as those with entity labels for the CoNLL03 dataset and 8 times for the OntoNotes5.0 dataset; Data-imbalanced issue is more severe for MRC tasks BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 with the value of negative-positive ratio being 50-200. Data imbalance results in the following two issues: (1) the training-test discrepancy: Without balancing the labels, the learning process tends to converge to a point that strongly biases towards class with the majority label. This actually creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function while at test time, F1 score concerns more about positive examples; (2) the overwhelming effect of easy-negative examples. As pointed out by meng2019dsreg, significantly large number of negative examples also means that the number of easy-negative example is large. The huge number of easy examples tends to overwhelm the training, making the model not sufficiently learned to distinguish between positive examples and hard-negative examples. The cross-entropy objective (CE for short) or maximum likelihood (MLE) objective, which is widely adopted as the training objective for data-imbalanced NLP tasks BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, handles neither of the issues. To handle the first issue, we propose to replace CE or MLE with losses based on the Sørensen–Dice coefficient BIBREF0 or Tversky index BIBREF1. The Sørensen–Dice coefficient, dice loss for short, is the harmonic mean of precision and recall. It attaches equal importance to false positives (FPs) and false negatives (FNs) and is thus more immune to data-imbalanced datasets. Tversky index extends dice loss by using a weight that trades precision and recall, which can be thought as the approximation of the $F_{\beta }$ score, and thus comes with more flexibility. Therefore, We use dice loss or Tversky index to replace CE loss to address the first issue. Only using dice loss or Tversky index is not enough since they are unable to address the dominating influence of easy-negative examples. This is intrinsically because dice loss is actually a hard version of the F1 score. Taking the binary classification task as an example, at test time, an example will be classified as negative as long as its probability is smaller than 0.5, but training will push the value to 0 as much as possible. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones. 
Inspired by the idea of focal loss BIBREF16 in computer vision, we propose a dynamic weight adjusting strategy, which associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds. This strategy helps to deemphasize confident examples during training as their $p$ approaches the value of 1, makes the model attentive to hard-negative examples, and thus alleviates the dominating effect of easy-negative examples. Combing both strategies, we observe significant performance boosts on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve SOTA results on CTB5 (97.92, +1.86), CTB6 (96.57, +1.80) and UD1.4 (96.98, +2.19) for the POS task; SOTA results on CoNLL03 (93.33, +0.29), OntoNotes5.0 (92.07, +0.96)), MSRA 96.72(+0.97) and OntoNotes4.0 (84.47,+2.36) for the NER task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification. The rest of this paper is organized as follows: related work is presented in Section 2. We describe different training objectives in Section 3. Experimental results are presented in Section 4. We perform ablation studies in Section 5, followed by a brief conclusion in Section 6. Related Work ::: Data Resample The idea of weighting training examples has a long history. Importance sampling BIBREF17 assigns weights to different samples and changes the data distribution. Boosting algorithms such as AdaBoost BIBREF18 select harder examples to train subsequent classifiers. Similarly, hard example mining BIBREF19 downsamples the majority class and exploits the most difficult examples. Oversampling BIBREF20, BIBREF21 is used to balance the data distribution. Another line of data resampling is to dynamically control the weights of examples as training proceeds. For example, focal loss BIBREF16 used a soft weighting scheme that emphasizes harder examples during training. In self-paced learning BIBREF22, example weights are obtained through optimizing the weighted training loss which encourages learning easier examples first. At each training step, self-paced learning algorithm optimizes model parameters and example weights jointly. Other works BIBREF23, BIBREF24 adjusted the weights of different training examples based on training loss. Besides, recent work BIBREF25, BIBREF26 proposed to learn a separate network to predict sample weights. Related Work ::: Data Imbalance Issue in Object Detection The background-object label imbalance issue is severe and thus well studied in the field of object detection BIBREF27, BIBREF28, BIBREF29, BIBREF30, BIBREF31. The idea of hard negative mining (HNM) BIBREF30 has gained much attention recently. shrivastava2016ohem proposed the online hard example mining (OHEM) algorithm in an iterative manner that makes training progressively more difficult, and pushes the model to learn better. ssd2016liu sorted all of the negative samples based on the confidence loss and picking the training examples with the negative-positive ratio at 3:1. pang2019rcnn proposed a novel method called IoU-balanced sampling and aploss2019chen designed a ranking model to replace the conventional classification task with a average-precision loss to alleviate the class imbalance issue. The efforts made on object detection have greatly inspired us to solve the data imbalance issue in NLP. Losses ::: Notation For illustration purposes, we use the binary classification task to demonstrate how different losses work. 
The mechanism can be easily extended to multi-class classification. Let $\lbrace x_i\rbrace $ denote a set of instances. Each $x_i$ is associated with a golden label vector $y_i = [y_{i0},y_{i1} ]$, where $y_{i1}\in \lbrace 0,1\rbrace $ and $y_{i0}\in \lbrace 0,1\rbrace $ respectively denote the positive and negative classes, and thus $y_i$ can be either $[0,1]$ or $[1,0]$. Let $p_i = [p_{i0},p_{i1} ]$ denote the probability vector, where $p_{i1}$ and $p_{i0}$ respectively denote the probability that a model assigns the positive and negative label to $x_i$. Losses ::: Cross Entropy Loss The vanilla cross entropy (CE) loss is given by: As can be seen from Eq.DISPLAY_FORM8, each $x_i$ contributes equally to the final objective. Two strategies are normally used to address the case where we do not wish all $x_i$ to be treated equally: associating different classes with different weighting factors $\alpha $ or resampling the datasets. For the former, Eq.DISPLAY_FORM8 is adjusted as follows: where $\alpha _i\in [0,1]$ may be set by the inverse class frequency or treated as a hyperparameter to set by cross validation. In this work, we use $\lg (\frac{n-n_t}{n_t}+K)$ to calculate the coefficient $\alpha $, where $n_t$ is the number of samples with class $t$ and $n$ is the total number of samples in the training set. $K$ is a hyperparameter to tune. The data resampling strategy constructs a new dataset by sampling training examples from the original dataset based on human-designed criteria, e.g., extracting an equal number of training samples from each class. Both strategies are equivalent to changing the data distribution and thus are of the same nature. Empirically, these two methods are not widely used due to the trickiness of selecting $\alpha $, especially for multi-class classification tasks, and the fact that inappropriate selection can easily bias towards rare classes BIBREF32. Losses ::: Dice coefficient and Tversky index The Sørensen–Dice coefficient BIBREF0, BIBREF33, dice coefficient (DSC) for short, is an F1-oriented statistic used to gauge the similarity of two sets. Given two sets $A$ and $B$, the dice coefficient between them is given as follows: In our case, $A$ is the set of all positive examples predicted by a specific model, and $B$ is the set of all golden positive examples in the dataset. When applied to boolean data with the definition of true positive (TP), false positive (FP), and false negative (FN), it can then be written as follows: For an individual example $x_i$, its corresponding DSC loss is given as follows: As can be seen, for a negative example with $y_{i1}=0$, it does not contribute to the objective. For smoothing purposes, it is common to add a $\gamma $ factor to both the numerator and the denominator, giving the following form: As can be seen, negative examples, with $y_{i1}$ being 0 and DSC being $\frac{\gamma }{ p_{i1}+\gamma }$, also contribute to the training. Additionally, milletari2016v proposed to change the denominator to the square form for faster convergence, which leads to the following dice loss (DL): Another version of DL is to directly compute the set-level dice coefficient instead of the sum of individual dice coefficients. We choose the latter due to ease of optimization. Tversky index (TI), which can be thought of as an approximation of the $F_{\beta }$ score, extends the dice coefficient to a more general case. 
Given two sets $A$ and $B$, tversky index is computed as follows: Tversky index offers the flexibility in controlling the tradeoff between false-negatives and false-positives. It degenerates to DSC if $\alpha =\beta =0.5$. The Tversky loss (TL) for the training set $\lbrace x_i,y_i\rbrace $ is thus as follows: Losses ::: Self-adusting Dice Loss Consider a simple case where the dataset consists of only one example $x_i$, which is classified as positive as long as $p_{i1}$ is larger than 0.5. The computation of $F1$ score is actually as follows: Comparing Eq.DISPLAY_FORM14 with Eq.DISPLAY_FORM22, we can see that Eq.DISPLAY_FORM14 is actually a soft form of $F1$, using a continuous $p$ rather than the binary $\mathbb {I}( p_{i1}>0.5)$. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones, which has a huge negative effect on the final F1 performance. To address this issue, we propose to multiply the soft probability $p$ with a decaying factor $(1-p)$, changing Eq.DISPLAY_FORM22 to the following form: One can think $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives gives an explanation from the perspective in derivative: the derivative of $\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified. A close look at Eq.DISPLAY_FORM14 reveals that it actually mimics the idea of focal loss (FL for short) BIBREF16 for object detection in vision. Focal loss was proposed for one-stage object detector to handle foreground-background tradeoff encountered during training. It down-weights the loss assigned to well-classified examples by adding a $(1-p)^{\beta }$ factor, leading the final loss to be $(1-p)^{\beta }\log p$. In Table TABREF18, we show the losses used in our experiments, which is described in the next section. Experiments We evaluate the proposed method on four NLP tasks: part-of-speech tagging, named entity recognition, machine reading comprehension and paraphrase identification. Baselines in our experiments are optimized by using the standard cross-entropy training objective. Experiments ::: Part-of-Speech Tagging Part-of-speech tagging (POS) is the task of assigning a label (e.g., noun, verb, adjective) to each word in a given text. In this paper, we choose BERT as the backbone and conduct experiments on three Chinese POS datasets. We report the span-level micro-averaged precision, recall and F1 for evaluation. Hyperparameters are tuned on the corresponding development set of each dataset. Experiments ::: Part-of-Speech Tagging ::: Datasets We conduct experiments on the widely used Chinese Treebank 5.0, 6.0 as well as UD1.4. CTB5 is a Chinese dataset for tagging and parsing, which contains 507,222 words, 824,983 characters and 18,782 sentences extracted from newswire sources. CTB6 is an extension of CTB5, containing 781,351 words, 1,285,149 characters and 28,295 sentences. 
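To make the self-adjusting dice loss described above concrete, here is a minimal PyTorch sketch for binary classification. The exact algebraic form follows the per-example DSC with the $(1-p)p$ decaying factor and $\gamma $ smoothing sketched in the text; the choice of $\gamma $ and mean reduction are assumptions, and this is an illustration of the idea rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class SelfAdjustingDiceLoss(nn.Module):
    """1 - DSC with a (1 - p) decaying factor that down-weights easy examples."""

    def __init__(self, gamma: float = 1.0):
        super().__init__()
        self.gamma = gamma  # smoothing term added to numerator and denominator

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits: (batch,) raw scores for the positive class; targets: (batch,) in {0, 1}
        p = torch.sigmoid(logits)
        weighted_p = (1.0 - p) * p  # the self-adjusting factor: easy examples get small weight
        dsc = (2.0 * weighted_p * targets + self.gamma) / (weighted_p + targets + self.gamma)
        return 1.0 - dsc.mean()

# Usage on a toy batch
loss_fn = SelfAdjustingDiceLoss(gamma=1.0)
logits = torch.tensor([2.5, -1.0, 0.3])
targets = torch.tensor([1.0, 0.0, 1.0])
print(loss_fn(logits, targets))
```

As the text notes, once an example is confidently classified its $(1-p)p$ weight approaches zero, so its gradient contribution vanishes and training focuses on harder examples.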
UD is the abbreviation of Universal Dependencies, which is a framework for consistent annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages. In this work, we use UD1.4 for Chinese POS tagging. Experiments ::: Part-of-Speech Tagging ::: Baselines We use the following baselines: Joint-POS: shao2017character jointly learns Chinese word segmentation and POS. Lattice-LSTM: lattice2018zhang constructs a word-character lattice. Bert-Tagger: devlin2018bert treats part-of-speech as a tagging task. Experiments ::: Part-of-Speech Tagging ::: Results Table presents the experimental results on the POS task. As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4. As far as we are concerned, we are achieving SOTA performances on the three datasets. Weighted cross entropy and focal loss only gain a little performance improvement on CTB5 and CTB6, and the dice loss obtains huge gain on CTB5 but not on CTB6, which indicates the three losses are not consistently robust in resolving the data imbalance issue. The proposed DSC loss performs robustly on all the three datasets. Experiments ::: Named Entity Recognition Named entity recognition (NER) refers to the task of detecting the span and semantic category of entities from a chunk of text. Our implementation uses the current state-of-the-art BERT-MRC model proposed by xiaoya2019ner as a backbone. For English datasets, we use BERT$_\text{Large}$ English checkpoints, while for Chinese we use the official Chinese checkpoints. We report span-level micro-averaged precision, recall and F1-score. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Named Entity Recognition ::: Datasets For the NER task, we consider both Chinese datasets, i.e., OntoNotes4.0 BIBREF34 and MSRA BIBREF35, and English datasets, i.e., CoNLL2003 BIBREF36 and OntoNotes5.0 BIBREF37. CoNLL2003 is an English dataset with 4 entity types: Location, Organization, Person and Miscellaneous. We followed data processing protocols in BIBREF14. English OntoNotes5.0 consists of texts from a wide variety of sources and contains 18 entity types. We use the standard train/dev/test split of CoNLL2012 shared task. Chinese MSRA performs as a Chinese benchmark dataset containing 3 entity types. Data in MSRA is collected from news domain. Since the development set is not provided in the original MSRA dataset, we randomly split the training set into training and development splits by 9:1. We use the official test set for evaluation. Chinese OntoNotes4.0 is a Chinese dataset and consists of texts from news domain, which has 18 entity types. In this paper, we take the same data split as wu2019glyce did. Experiments ::: Named Entity Recognition ::: Baselines We use the following baselines: ELMo: a tagging model from peters2018deep. Lattice-LSTM: lattice2018zhang constructs a word-character lattice, only used in Chinese datasets. CVT: from kevin2018cross, which uses Cross-View Training(CVT) to improve the representations of a Bi-LSTM encoder. Bert-Tagger: devlin2018bert treats NER as a tagging task. Glyce-BERT: wu2019glyce combines glyph information with BERT pretraining. BERT-MRC: The current SOTA model for both Chinese and English NER datasets proposed by xiaoya2019ner, which formulate NER as machine reading comprehension task. 
Experiments ::: Named Entity Recognition ::: Results Table shows experimental results on NER datasets. For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively. We observe huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively. As far as we are concerned, we are setting new SOTA performances on all of the four NER datasets. Experiments ::: Machine Reading Comprehension Machine reading comprehension (MRC) BIBREF39, BIBREF40, BIBREF41, BIBREF40, BIBREF42, BIBREF15 has become a central task in natural language understanding. MRC in the SQuAD-style is to predict the answer span in the passage given a question and the passage. In this paper, we choose the SQuAD-style MRC task and report Exact Match (EM) in addition to F1 score on the validation set. All hyperparameters are tuned on the development set of each dataset. Experiments ::: Machine Reading Comprehension ::: Datasets The following datasets are used for the MRC task: SQuAD v1.1, SQuAD v2.0 BIBREF4, BIBREF6 and Quoref BIBREF8. SQuAD v1.1 and SQuAD v2.0 are the most widely used QA benchmarks. SQuAD1.1 is a collection of 100K crowdsourced question-answer pairs, and SQuAD2.0 extends SQuAD1.1 by allowing that no short answer exists in the provided passage. Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems, containing 24K questions over 4.7K paragraphs from Wikipedia. Experiments ::: Machine Reading Comprehension ::: Baselines We use the following baselines: QANet: qanet2018 builds a model based on convolutions and self-attention. Convolution to model local interactions and self-attention to model global interactions. BERT: devlin2018bert pre-trains deep bidirectional representations and fine-tunes them for MRC. XLNet: xlnet2019 proposes a generalized autoregressive pretraining method that enables learning bidirectional contexts. Experiments ::: Machine Reading Comprehension ::: Results Table shows the experimental results for MRC tasks. With either BERT or XLNet, our proposed DSC loss obtains significant performance boost on both EM and F1. For SQuADv1.1, our proposed method outperforms XLNet by +1.25 in terms of F1 score and +0.84 in terms of EM and achieves 87.65 on EM and 89.51 on F1 for SQuAD v2.0. Moreover, on QuoRef, the proposed method surpasses XLNet results by +1.46 on EM and +1.41 on F1. Another observation is that, XLNet outperforms BERT by a huge margin, and the proposed DSC loss can obtain further performance improvement by an average score above 1.0 in terms of both EM and F1, which indicates the DSC loss is complementary to the model structures. Experiments ::: Paraphrase Identification Paraphrases are textual expressions that have the same semantic meaning using different surface words. Paraphrase identification (PI) is the task of identifying whether two sentences have the same meaning or not. We use BERT BIBREF11 and XLNet BIBREF43 as backbones and report F1 score for comparison. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Paraphrase Identification ::: Datasets We conduct experiments on two widely used datasets for PI task: MRPC BIBREF44 and QQP. MRPC is a corpus of sentence pairs automatically extracted from online news sources, with human annotations of whether the sentence pairs are semantically equivalent. The MRPC dataset has imbalanced classes (68% positive, 32% negative). 
QQP is a collection of question pairs from the community question-answering website Quora. The class distribution in QQP is also unbalanced (37% positive, 63% negative). Experiments ::: Paraphrase Identification ::: Results Table shows the results for PI task. We find that replacing the training objective with DSC introduces performance boost for both BERT and XLNet. Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP. Ablation Studies ::: The Effect of Dice Loss on Accuracy-oriented Tasks We argue that the most commonly used cross-entropy objective is actually accuracy-oriented, whereas the proposed dice loss (DL) performs as a hard version of F1-score. To explore the effect of the dice loss on accuracy-oriented tasks such as text classification, we conduct experiments on the Stanford Sentiment Treebank sentiment classification datasets including SST-2 and SST-5. We fine-tune BERT$_\text{Large}$ with different training objectives. Experiment results for SST are shown in . For SST-5, BERT with CE achieves 55.57 in terms of accuracy, with DL and DSC losses slightly degrade the accuracy performance and achieve 54.63 and 55.19, respectively. For SST-2, BERT with CE achieves 94.9 in terms of accuracy. The same as SST-5, we observe a slight performance drop with DL and DSC, which means that the dice loss actually works well for F1 but not for accuracy. Ablation Studies ::: The Effect of Hyperparameters in Tversky index As mentioned in Section SECREF10, Tversky index (TI) offers the flexibility in controlling the tradeoff between false-negatives and false-positives. In this subsection, we explore the effect of hyperparameters (i.e., $\alpha $ and $\beta $) in TI to test how they manipulate the tradeoff. We conduct experiments on the Chinese OntoNotes4.0 NER dataset and English QuoRef MRC dataset to examine the influence of tradeoff between precision and recall. Experiment results are shown in Table . The highest F1 for Chinese OntoNotes4.0 is 84.67 when $\alpha $ is set to 0.6 while for QuoRef, the highest F1 is 68.44 when $\alpha $ is set to 0.4. In addition, we can observe that the performance varies a lot as $\alpha $ changes in distinct datasets, which shows that the hyperparameters $\alpha ,\beta $ play an important role in the proposed method. Conclusion In this paper, we alleviate the severe data imbalance issue in NLP tasks. We propose to use dice loss in replacement of the standard cross-entropy loss, which performs as a soft version of F1 score. Using dice loss can help narrow the gap between training objectives and evaluation metrics. Empirically, we show that the proposed training objective leads to significant performance boost for part-of-speech, named entity recognition, machine reading comprehension and paraphrase identification tasks. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How are weights dynamically adjusted? Only give me the answer and do not output any other words.
Answer:
[ "One can think $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives gives an explanation from the perspective in derivative: the derivative of $\\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified.", "associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds" ]
276
5,435
single_doc_qa
2b30943f644c4dbdb0a9394f740a8e4a
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The concept of message passing over graphs has been around for many years BIBREF0, BIBREF1, as well as that of graph neural networks (GNNs) BIBREF2, BIBREF3. However, GNNs have only recently started to be closely investigated, following the advent of deep learning. Some notable examples include BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. These approaches are known as spectral. Their similarity with message passing (MP) was observed by BIBREF9 and formalized by BIBREF13 and BIBREF14. The MP framework is based on the core idea of recursive neighborhood aggregation. That is, at every iteration, the representation of each vertex is updated based on messages received from its neighbors. All spectral GNNs can be described in terms of the MP framework. GNNs have been applied with great success to bioinformatics and social network data, for node classification, link prediction, and graph classification. However, a few studies only have focused on the application of the MP framework to representation learning on text. This paper proposes one such application. More precisely, we represent documents as word co-occurrence networks, and develop an expressive MP GNN tailored to document understanding, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Evaluation on 10 document classification datasets show that our architectures learn representations that are competitive with the state-of-the-art. Furthermore, ablation experiments shed light on the impact of various architectural choices. In what follows, we first provide some background about the MP framework (in sec. SECREF2), thoroughly describe and explain MPAD (sec. SECREF3), present our experimental framework (sec. SECREF4), report and interpret our results (sec. SECREF5), and provide a review of the relevant literature (sec. SECREF6). Message Passing Neural Networks BIBREF13 proposed a MP framework under which many of the recently introduced GNNs can be reformulated. MP consists in an aggregation phase followed by a combination phase BIBREF14. More precisely, let $G(V,E)$ be a graph, and let us consider $v \in V$. At time $t+1$, a message vector $\mathbf {m}_v^{t+1}$ is computed from the representations of the neighbors $\mathcal {N}(v)$ of $v$: The new representation $\mathbf {h}^{t+1}_v$ of $v$ is then computed by combining its current feature vector $\mathbf {h}^{t}_v$ with the message vector $\mathbf {m}_v^{t+1}$: Messages are passed for $T$ time steps. Each step is implemented by a different layer of the MP network. Hence, iterations correspond to network depth. The final feature vector $\mathbf {h}_v^T$ of $v$ is based on messages propagated from all the nodes in the subtree of height $T$ rooted at $v$. It captures both the topology of the neighborhood of $v$ and the distribution of the vertex representations in it. If a graph-level feature vector is needed, e.g., for classification or regression, a READOUT pooling function, that must be invariant to permutations, is applied: Next, we present the MP network we developed for document understanding. 
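Before that, to make the aggregate-combine-readout scheme above concrete, here is a minimal sketch in plain NumPy. It is illustrative only: the mean aggregation, the single tanh projection used as COMBINE, the sum READOUT, and all variable names are simplifying assumptions of mine, not the MPAD operators defined in the following sections.

```python
import numpy as np

def message_passing(adj, feats, weights, num_steps):
    """Run `num_steps` rounds of neighborhood aggregation on a toy graph.

    adj:     (n, n) 0/1 adjacency matrix, adj[i, j] = 1 if node j sends messages to node i
    feats:   (n, d) initial node feature matrix H^0
    weights: list of (2d, d) matrices, one COMBINE projection per step
    """
    h = feats
    for t in range(num_steps):
        deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)          # in-degrees, guarded against 0
        messages = (adj @ h) / deg                                    # AGGREGATE: mean over in-neighbors
        h = np.tanh(np.concatenate([h, messages], axis=1) @ weights[t])  # COMBINE: simple gated-free update
    return h.sum(axis=0)                                              # READOUT: permutation-invariant sum

# Toy usage: a 3-node path graph with 4-dimensional features and T = 2 steps.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = rng.normal(size=(3, 4))
weights = [rng.normal(size=(8, 4)) for _ in range(2)]
graph_vector = message_passing(adj, feats, weights, num_steps=2)
print(graph_vector.shape)   # (4,)
```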
Message Passing Attention network for Document understanding (MPAD) ::: Word co-occurrence networks We represent a document as a statistical word co-occurrence network BIBREF18, BIBREF19 with a sliding window of size 2 overspanning sentences. Let us denote that graph $G(V,E)$. Each unique word in the preprocessed document is represented by a node in $G$, and an edge is added between two nodes if they are found together in at least one instantiation of the window. $G$ is directed and weighted: edge directions and weights respectively capture text flow and co-occurrence counts. $G$ is a compact representation of its document. In $G$, immediate neighbors are consecutive words in the same sentence. That is, paths of length 2 correspond to bigrams. Paths of length more than 2 can correspond either to traditional $n$-grams or to relaxed $n$-grams, that is, words that never appear in the same sentence but co-occur with the same word(s). Such nodes are linked through common neighbors. Master node. Inspired by BIBREF3, our $G$ also includes a special document node, linked to all other nodes via unit weight bi-directional edges. In what follows, let us denote by $n$ the number of nodes in $G$, including the master node. Message Passing Attention network for Document understanding (MPAD) ::: Message passing We formulate our AGGREGATE function as: where $\mathbf {H}^t \in \mathbb {R}^{n \times d}$ contains node features ($d$ is a hyperparameter), and $\mathbf {A} \in \mathbb {R}^{n \times n}$ is the adjacency matrix of $G$. Since $G$ is directed, $\mathbf {A}$ is asymmetric. Also, $\mathbf {A}$ has zero diagonal as we choose not to consider the feature of the node itself, only that of its incoming neighbors, when updating its representation. Since $G$ is weighted, the $i^{th}$ row of $A$ contains the weights of the edges incoming on node $v_i$. $\mathbf {D} \in \mathbb {R}^{n \times n}$ is the diagonal in-degree matrix of $G$. MLP denotes a multi-layer perceptron, and $\mathbf {M}^{t+1} \in \mathbb {R}^{n \times d}$ is the message matrix. The use of a MLP was motivated by the observation that for graph classification, MP neural nets with 1-layer perceptrons are inferior to their MLP counterparts BIBREF14. Indeed, 1-layer perceptrons are not universal approximators of multiset functions. Note that like in BIBREF14, we use a different MLP at each layer. Renormalization. The rows of $\mathbf {D}^{-1}\mathbf {A}$ sum to 1. This is equivalent to the renormalization trick of BIBREF9, but using only the in-degrees. That is, instead of computing a weighted sum of the incoming neighbors' feature vectors, we compute a weighted average of them. The coefficients are proportional to the strength of co-occurrence between words. One should note that by averaging, we lose the ability to distinguish between different neighborhood structures in some special cases, that is, we lose injectivity. Such cases include neighborhoods in which all nodes have the same representations, and neighborhoods of different sizes containing various representations in equal proportions BIBREF14. As suggested by the results of an ablation experiment, averaging is better than summing in our application (see subsection SECREF30). Note that instead of simply summing/averaging, we also tried using GAT-like attention BIBREF11 in early experiments, without obtaining better results. 
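Before moving on to the COMBINE function, the following rough sketch shows how the directed, weighted word co-occurrence network and the renormalized aggregation $\mathbf{D}^{-1}\mathbf{A}\mathbf{H}^t$ described above could be assembled. The whitespace tokenization, the choice of index 0 for the master node, and the omission of the per-layer MLP are my assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def build_cooccurrence_graph(tokens):
    """Build a directed, weighted word co-occurrence network as described above.

    A sliding window of size 2 adds an edge u -> v for every occurrence of the
    bigram (u, v); edge weights are co-occurrence counts. A master document
    node (index 0 here, an implementation choice) is linked to every word node
    with unit-weight edges in both directions.
    """
    vocab = {}
    for tok in tokens:
        vocab.setdefault(tok, len(vocab) + 1)      # index 0 is reserved for the master node
    n = len(vocab) + 1
    adj = np.zeros((n, n))
    for u, v in zip(tokens, tokens[1:]):           # window of size 2, follows text flow
        adj[vocab[v], vocab[u]] += 1.0             # row i holds weights of edges incoming on node i
    adj[0, 1:] = 1.0                               # master node <-> word nodes, unit weights
    adj[1:, 0] = 1.0
    return vocab, adj

def aggregate(adj, h):
    """One renormalized aggregation step, M = MLP(D^-1 A H); the MLP is left out here."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)   # diagonal in-degree matrix as a column
    return (adj / deg) @ h                                # weighted average of incoming neighbors

tokens = "the cat sat on the mat".split()
vocab, adj = build_cooccurrence_graph(tokens)
h0 = np.random.default_rng(0).normal(size=(len(vocab) + 1, 8))
messages = aggregate(adj, h0)
print(adj.shape, messages.shape)                   # (6, 6) (6, 8)
```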
As far as our COMBINE function, we use the Gated Recurrent Unit BIBREF20, BIBREF21: Omitting biases for readability, we have: where the $\mathbf {W}$ and $\mathbf {U}$ matrices are trainable weight matrices not shared across time steps, $\sigma (\mathbf {x}) = 1/(1+\exp (-\mathbf {x}))$ is the sigmoid function, and $\mathbf {R}$ and $\mathbf {Z}$ are the parameters of the reset and update gates. The reset gate controls the amount of information from the previous time step (in $\mathbf {H}^t$) that should propagate to the candidate representations, $\tilde{\mathbf {H}}^{t+1}$. The new representations $\mathbf {H}^{t+1}$ are finally obtained by linearly interpolating between the previous and the candidate ones, using the coefficients returned by the update gate. Interpretation. Updating node representations through a GRU should in principle allow nodes to encode a combination of local and global signals (low and high values of $t$, resp.), by allowing them to remember about past iterations. In addition, we also explicitly consider node representations at all iterations when reading out (see Eq. DISPLAY_FORM18). Message Passing Attention network for Document understanding (MPAD) ::: Readout After passing messages and performing updates for $T$ iterations, we obtain a matrix $\mathbf {H}^T \in \mathbb {R}^{n \times d}$ containing the final vertex representations. Let $\hat{G}$ be graph $G$ without the special document node, and matrix $\mathbf {\hat{H}}^T \in \mathbb {R}^{(n-1) \times d}$ be the corresponding representation matrix (i.e., $\mathbf {H}^T$ without the row of the document node). We use as our READOUT function the concatenation of self-attention applied to $\mathbf {\hat{H}}^T$ with the final document node representation. More precisely, we apply a global self-attention mechanism BIBREF22 to the rows of $\mathbf {\hat{H}}^T$. As shown in Eq. DISPLAY_FORM17, $\mathbf {\hat{H}}^T$ is first passed to a dense layer parameterized by matrix $\mathbf {W}_A^T \in \mathbb {R}^{d \times d}$. An alignment vector $\mathbf {a}$ is then derived by comparing, via dot products, the rows of the output of the dense layer $\mathbf {Y}^T \in \mathbb {R}^{(n-1) \times d}$ with a trainable vector $\mathbf {v}^T \in \mathbb {R}^d$ (initialized randomly) and normalizing with a softmax. The normalized alignment coefficients are finally used to compute the attentional vector $\mathbf {u}^T \in \mathbb {R}^d$ as a weighted sum of the final representations $\mathbf {\hat{H}}^T$. Note that we tried with multiple context vectors, i.e., with a matrix $\mathbf {V}^T$ instead of a vector $\mathbf {v}^T$, like in BIBREF22, but results were not convincing, even when adding a regularization term to the loss to favor diversity among the rows of $\mathbf {V}^T$. Master node skip connection. $\mathbf {h}_G^T \in \mathbb {R}^{2d}$ is obtained by concatenating $\mathbf {u}^T$ and the final master node representation. That is, the master node vector bypasses the attention mechanism. This is equivalent to a skip or shortcut connection BIBREF23. The reason behind this choice is that we expect the special document node to learn a high-level summary about the document, such as its size, vocabulary, etc. (more details are given in subsection SECREF30). Therefore, by making the master node bypass the attention layer, we directly inject global information about the document into its final representation. Multi-readout. 
BIBREF14, inspired by Jumping Knowledge Networks BIBREF12, recommend to not only use the final representations when performing readout, but also that of the earlier steps. Indeed, as one iterates, node features capture more and more global information. However, retaining more local, intermediary information might be useful too. Thus, instead of applying the readout function only to $t=T$, we apply it to all time steps and concatenate the results, finally obtaining $\mathbf {h}_G \in \mathbb {R}^{T \times 2d}$ : In effect, with this modification, we take into account features based on information aggregated from subtrees of different heights (from 1 to $T$), corresponding to local and global features. Message Passing Attention network for Document understanding (MPAD) ::: Hierarchical variants of MPAD Through the successive MP iterations, it could be argued that MPAD implicitly captures some soft notion of the hierarchical structure of documents (words $\rightarrow $ bigrams $\rightarrow $ compositions of bigrams, etc.). However, it might be beneficial to explicitly capture document hierarchy. Hierarchical architectures have brought significant improvements to many NLP tasks, such as language modeling and generation BIBREF24, BIBREF25, sentiment and topic classification BIBREF26, BIBREF27, and spoken language understanding BIBREF28, BIBREF29. Inspired by this line of research, we propose several hierarchical variants of MPAD, detailed in what follows. In all of them, we represent each sentence in the document as a word co-occurrence network, and obtain an embedding for it by applying MPAD as previously described. MPAD-sentence-att. Here, the sentence embeddings are simply combined through self-attention. MPAD-clique. In this variant, we build a complete graph where each node represents a sentence. We then feed that graph to MPAD, where the feature vectors of the nodes are initialized with the sentence embeddings previously obtained. MPAD-path. This variant is similar to the clique one, except that instead of a complete graph, we build a path according to the natural flow of the text. That is, two nodes are linked by a directed edge if the two sentences they represent follow each other in the document. Experiments ::: Datasets We evaluate the quality of the document embeddings learned by MPAD on 10 document classification datasets, covering the topic identification, coarse and fine sentiment analysis and opinion mining, and subjectivity detection tasks. We briefly introduce the datasets next. Their statistics are reported in Table TABREF21. (1) Reuters. This dataset contains stories collected from the Reuters news agency in 1987. Following common practice, we used the ModApte split and considered only the 10 classes with the highest number of positive training examples. We also removed documents belonging to more than one class and then classes left with no document (2 classes). (2) BBCSport BIBREF30 contains documents from the BBC Sport website corresponding to 2004-2005 sports news articles. (3) Polarity BIBREF31 features positive and negative labeled snippets from Rotten Tomatoes. (4) Subjectivity BIBREF32 contains movie review snippets from Rotten Tomatoes (subjective sentences), and Internet Movie Database plot summaries (objective sentences). (5) MPQA BIBREF33 is made of positive and negative phrases, annotated as part of the summer 2002 NRRC Workshop on Multi-Perspective Question Answering. 
(6) IMDB BIBREF34 is a collection of highly polarized movie reviews from IMDB (positive and negative). There are at most 30 reviews for each movie. (7) TREC BIBREF35 consists of questions that are classified into 6 different categories. (8) SST-1 BIBREF36 contains the same snippets as Polarity. The authors used the Stanford Parser to parse the snippets and split them into multiple sentences. They then used Amazon Mechanical Turk to annotate the resulting phrases according to their polarity (very negative, negative, neutral, positive, very positive). (9) SST-2 BIBREF36 is the same as SST-1 but with neutral reviews removed and snippets classified as positive or negative. (10) Yelp2013 BIBREF26 features reviews obtained from the 2013 Yelp Dataset Challenge. Experiments ::: Baselines We evaluate MPAD against multiple state-of-the-art baseline models, including hierarchical ones, to enable fair comparison with the hierarchical MPAD variants. doc2vec BIBREF37. Doc2vec (or paragraph vector) is an extension of word2vec that learns vectors for documents in a fully unsupervised manner. Document embeddings are then fed to a logistic regression classifier. CNN BIBREF38. The convolutional neural network architecture, well-known in computer vision, is applied to text. There is one spatial dimension and the word embeddings are used as channels (depth dimensions). DAN BIBREF39. The Deep Averaging Network passes the unweighted average of the embeddings of the input words through multiple dense layers and a final softmax. Tree-LSTM BIBREF40 is a generalization of the standard LSTM architecture to constituency and dependency parse trees. DRNN BIBREF41. Recursive neural networks are stacked and applied to parse trees. LSTMN BIBREF42 is an extension of the LSTM model where the memory cell is replaced by a memory network which stores word representations. C-LSTM BIBREF43 combines convolutional and recurrent neural networks. The region embeddings provided by a CNN are fed to a LSTM. SPGK BIBREF44 also models documents as word co-occurrence networks. It computes a graph kernel that compares shortest paths extracted from the word co-occurrence networks and then uses a SVM to categorize documents. WMD BIBREF45 is an application of the well-known Earth Mover's Distance to text. A k-nearest neighbor classifier is used. S-WMD BIBREF46 is a supervised extension of the Word Mover's Distance. Semantic-CNN BIBREF47. Here, a CNN is applied to semantic units obtained by clustering words in the embedding space. LSTM-GRNN BIBREF26 is a hierarchical model where sentence embeddings are obtained with a CNN and a GRU-RNN is fed the sentence representations to obtain a document vector. HN-ATT BIBREF27 is another hierarchical model, where the same encoder architecture (a bidirectional GRU-RNN) is used for both sentences and documents, with different parameters. A self-attention mechanism is applied to the RNN annotations at each level. Experiments ::: Model configuration and training We preprocess all datasets using the code of BIBREF38. On Yelp2013, we also replace all tokens appearing strictly less than 6 times with a special UNK token, like in BIBREF27. We then build a directed word co-occurrence network from each document, with a window of size 2. We use two MP iterations ($T$=2) for the basic MPAD, and two MP iterations at each level, for the hierarchical variants. We set $d$ to 64, except on IMDB and Yelp on which $d=128$, and use a two-layer MLP. 
The final graph representations are passed through a softmax for classification. We train MPAD in an end-to-end fashion by minimizing the cross-entropy loss function with the Adam optimizer BIBREF48 and an initial learning rate of 0.001. To regulate potential differences in magnitude, we apply batch normalization after concatenating the feature vector of the master node with the self-attentional vector, that is, after the skip connection (see subsection SECREF16). To prevent overfitting, we use dropout BIBREF49 with a rate of 0.5. We select the best epoch, capped at 200, based on the validation accuracy. When cross-validation is used (see 3rd column of Table TABREF21), we construct a validation set by randomly sampling 10% of the training set of each fold. On all datasets except Yelp2013, we use the publicly available 300-dimensional pre-trained Google News vectors ($D$=300) BIBREF50 to initialize the node representations $\mathbf {H}^0$. On Yelp2013, we follow BIBREF27 and learn our own word vectors from the training and validation sets with the gensim implementation of word2vec BIBREF51. MPAD was implemented in Python 3.6 using the PyTorch library BIBREF52. All experiments were run on a single machine consisting of a 3.4 GHz Intel Core i7 CPU with 16 GB of RAM and an NVidia GeForce Titan Xp GPU. Results and ablations ::: Results Experimental results are shown in Table TABREF28. For the baselines, the best scores reported in each original paper are shown. MPAD reaches best performance on 7 out of 10 datasets, and is close second elsewhere. Moreover, the 7 datasets on which MPAD ranks first widely differ in training set size, number of categories, and prediction task (topic, sentiment, subjectivity), which indicates that MPAD can perform well in different settings. MPAD vs. hierarchical variants. On 9 datasets out of 10, one or more of the hierarchical variants outperform the vanilla MPAD architecture, highlighting the benefit of explicitly modeling the hierarchical nature of documents. However, on Subjectivity, standard MPAD outperforms all hierarchical variants. On TREC, it reaches the same accuracy. We hypothesize that in some cases, using a different graph to separately encode each sentence might be worse than using one single graph to directly encode the document. Indeed, in the single document graph, some words that never appear in the same sentence can be connected through common neighbors, as was explained in subsection SECREF7. So, this way, some notion of cross-sentence context is captured while learning representations of words, bigrams, etc. at each MP iteration. This creates better informed representations, resulting in a better document embedding. With the hierarchical variants, on the other hand, each sentence vector is produced in isolation, without any contextual information about the other sentences in the document. Therefore, the final sentence embeddings might be of lower quality, and as a group might also contain redundant/repeated information. When the sentence vectors are finally combined into a document representation, it is too late to take context into account. Results and ablations ::: Ablation studies To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29. Number of MP iterations. First, we varied the number of message passing iterations from 1 to 4. 
We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\ge 2$ was always decreasing performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). These results were consistent with that of BIBREF53 and BIBREF9, who both are reading out only at $t$=$T$ too. We hypothesize that node features at $T\ge 2$ are too diffuse to be entirely relied upon during readout. More precisely, initially at $t$=0, node representations capture information about words, at $t$=1, about their 1-hop neighborhood (bigrams), at $t$=2, about compositions of bigrams, etc. Thus, pretty quickly, node features become general and diffuse. In such cases, considering also the lower-level, more precise features of the earlier iterations when reading out may be necessary. Undirected edges. On Reuters, using an undirected graph leads to better performance, while on Polarity and IMDB, it is the opposite. This can be explained by the fact that Reuters is a topic classification task, for which the presence or absence of some patterns is important, but not necessarily the order in which they appear, while Polarity and IMDB are sentiment analysis tasks. To capture sentiment, modeling word order is crucial, e.g., in detecting negation. No master node. Removing the master node deteriorates performance across all datasets, clearly showing the value of having such a node. We hypothesize that since the special document node is connected to all other nodes, it is able to encode during message passing a summary of the document. No renormalization. Here, we do not use the renormalization trick of BIBREF9 during MP (see subsection SECREF10). That is, Eq. DISPLAY_FORM11 becomes $\mathbf {M}^{t+1} = \textsc {MLP}^{t+1}\big (\mathbf {A}\mathbf {H}^{t}\big )$. In other words, instead of computing a weighted average of the incoming neighbors' feature vectors, we compute a weighted sum of them. Unlike the mean, which captures distributions, the sum captures structural information BIBREF14. As shown in Table TABREF29, using sum instead of mean decreases performance everywhere, suggesting that in our application, capturing the distribution of neighbor representations is more important that capturing their structure. We hypothesize that this is the case because statistical word co-occurrence networks tend to have similar structural properties, regardless of the topic, polarity, sentiment, etc. of the corresponding documents. Neighbors-only. In this experiment, we replaced the GRU combine function (see Eq. DISPLAY_FORM14) with the identity function. That is, we simply have $\mathbf {H}^{t+1}$=$\mathbf {M}^{t+1}$. Since $\mathbf {A}$ has zero diagonal, by doing so, we completely ignore the previous feature of the node itself when updating its representation. That is, the update is based entirely on its neighbors. Except on Reuters (almost no change), performance always suffers, stressing the need to take into account the root node during updates, not only its neighborhood. Related work In what follows, we offer a brief review of relevant studies, ranked by increasing order of similarity with our work. 
BIBREF9, BIBREF54, BIBREF11, BIBREF10 conduct some node classification experiments on citation networks, where nodes are scientific papers, i.e., textual data. However, text is only used to derive node feature vectors. The external graph structure, which plays a central role in determining node labels, is completely unrelated to text. On the other hand, BIBREF55, BIBREF7 experiment on traditional document classification tasks. They both build $k$-nearest neighbor similarity graphs based on the Gaussian diffusion kernel. More precisely, BIBREF55 build one single graph where nodes are documents and distance is computed in the BoW space. Node features are then used for classification. Closer to our work, BIBREF7 represent each document as a graph. All document graphs are derived from the same underlying structure. Only node features, corresponding to the entries of the documents' BoW vectors, vary. The underlying, shared structure is that of a $k$-NN graph where nodes are vocabulary terms and similarity is the cosine of the word embedding vectors. BIBREF7 then perform graph classification. However they found performance to be lower than that of a naive Bayes classifier. BIBREF56 use a GNN for hierarchical classification into a large taxonomy of topics. This task differs from traditional document classification. The authors represent documents as unweighted, undirected word co-occurrence networks with word embeddings as node features. They then use the spatial GNN of BIBREF15 to perform graph classification. The work closest to ours is probably that of BIBREF53. The authors adopt the semi-supervised node classification approach of BIBREF9. They build one single undirected graph from the entire dataset, with both word and document nodes. Document-word edges are weighted by TF-IDF and word-word edges are weighted by pointwise mutual information derived from co-occurrence within a sliding window. There are no document-document edges. The GNN is trained based on the cross-entropy loss computed only for the labeled nodes, that is, the documents in the training set. When the final node representations are obtained, one can use that of the test documents to classify them and evaluate prediction performance. There are significant differences between BIBREF53 and our work. First, our approach is inductive, not transductive. Indeed, while the node classification approach of BIBREF53 requires all test documents at training time, our graph classification model is able to perform inference on new, never-seen documents. The downside of representing documents as separate graphs, however, is that we lose the ability to capture corpus-level dependencies. Also, our directed graphs capture word ordering, which is ignored by BIBREF53. Finally, the approach of BIBREF53 requires computing the PMI for every word pair in the vocabulary, which may be prohibitive on datasets with very large vocabularies. On the other hand, the complexity of MPAD does not depend on vocabulary size. Conclusion We have proposed an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). Experiments conducted on 10 standard text classification datasets show that our architecture is competitive with the state-of-the-art. By processing weighted, directed word co-occurrence networks, MPAD is sensitive to word order and word-word relationship strength. 
To explicitly capture the hierarchical structure of documents, we also propose three hierarchical variants of MPAD, that we show bring improvements over the vanilla architecture. Acknowledgments We thank the NVidia corporation for the donation of a GPU as part of their GPU grant program. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Which component is the least impactful? Only give me the answer and do not output any other words.
Answer:
[ "Based on table results provided changing directed to undirected edges had least impact - max abs difference of 0.33 points on all three datasets." ]
276
6,132
single_doc_qa
005cfff7a79348379cd33deb0586925a
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Data imbalance is a common issue in a variety of NLP tasks such as tagging and machine reading comprehension. Table TABREF3 gives concrete examples: for the Named Entity Recognition (NER) task BIBREF2, BIBREF3, most tokens are backgrounds with tagging class $O$. Specifically, the number of tokens tagging class $O$ is 5 times as many as those with entity labels for the CoNLL03 dataset and 8 times for the OntoNotes5.0 dataset; Data-imbalanced issue is more severe for MRC tasks BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 with the value of negative-positive ratio being 50-200. Data imbalance results in the following two issues: (1) the training-test discrepancy: Without balancing the labels, the learning process tends to converge to a point that strongly biases towards class with the majority label. This actually creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function while at test time, F1 score concerns more about positive examples; (2) the overwhelming effect of easy-negative examples. As pointed out by meng2019dsreg, significantly large number of negative examples also means that the number of easy-negative example is large. The huge number of easy examples tends to overwhelm the training, making the model not sufficiently learned to distinguish between positive examples and hard-negative examples. The cross-entropy objective (CE for short) or maximum likelihood (MLE) objective, which is widely adopted as the training objective for data-imbalanced NLP tasks BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, handles neither of the issues. To handle the first issue, we propose to replace CE or MLE with losses based on the Sørensen–Dice coefficient BIBREF0 or Tversky index BIBREF1. The Sørensen–Dice coefficient, dice loss for short, is the harmonic mean of precision and recall. It attaches equal importance to false positives (FPs) and false negatives (FNs) and is thus more immune to data-imbalanced datasets. Tversky index extends dice loss by using a weight that trades precision and recall, which can be thought as the approximation of the $F_{\beta }$ score, and thus comes with more flexibility. Therefore, We use dice loss or Tversky index to replace CE loss to address the first issue. Only using dice loss or Tversky index is not enough since they are unable to address the dominating influence of easy-negative examples. This is intrinsically because dice loss is actually a hard version of the F1 score. Taking the binary classification task as an example, at test time, an example will be classified as negative as long as its probability is smaller than 0.5, but training will push the value to 0 as much as possible. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones. 
Inspired by the idea of focal loss BIBREF16 in computer vision, we propose a dynamic weight adjusting strategy, which associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds. This strategy helps to deemphasize confident examples during training as their $p$ approaches the value of 1, makes the model attentive to hard-negative examples, and thus alleviates the dominating effect of easy-negative examples. Combing both strategies, we observe significant performance boosts on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve SOTA results on CTB5 (97.92, +1.86), CTB6 (96.57, +1.80) and UD1.4 (96.98, +2.19) for the POS task; SOTA results on CoNLL03 (93.33, +0.29), OntoNotes5.0 (92.07, +0.96)), MSRA 96.72(+0.97) and OntoNotes4.0 (84.47,+2.36) for the NER task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification. The rest of this paper is organized as follows: related work is presented in Section 2. We describe different training objectives in Section 3. Experimental results are presented in Section 4. We perform ablation studies in Section 5, followed by a brief conclusion in Section 6. Related Work ::: Data Resample The idea of weighting training examples has a long history. Importance sampling BIBREF17 assigns weights to different samples and changes the data distribution. Boosting algorithms such as AdaBoost BIBREF18 select harder examples to train subsequent classifiers. Similarly, hard example mining BIBREF19 downsamples the majority class and exploits the most difficult examples. Oversampling BIBREF20, BIBREF21 is used to balance the data distribution. Another line of data resampling is to dynamically control the weights of examples as training proceeds. For example, focal loss BIBREF16 used a soft weighting scheme that emphasizes harder examples during training. In self-paced learning BIBREF22, example weights are obtained through optimizing the weighted training loss which encourages learning easier examples first. At each training step, self-paced learning algorithm optimizes model parameters and example weights jointly. Other works BIBREF23, BIBREF24 adjusted the weights of different training examples based on training loss. Besides, recent work BIBREF25, BIBREF26 proposed to learn a separate network to predict sample weights. Related Work ::: Data Imbalance Issue in Object Detection The background-object label imbalance issue is severe and thus well studied in the field of object detection BIBREF27, BIBREF28, BIBREF29, BIBREF30, BIBREF31. The idea of hard negative mining (HNM) BIBREF30 has gained much attention recently. shrivastava2016ohem proposed the online hard example mining (OHEM) algorithm in an iterative manner that makes training progressively more difficult, and pushes the model to learn better. ssd2016liu sorted all of the negative samples based on the confidence loss and picking the training examples with the negative-positive ratio at 3:1. pang2019rcnn proposed a novel method called IoU-balanced sampling and aploss2019chen designed a ranking model to replace the conventional classification task with a average-precision loss to alleviate the class imbalance issue. The efforts made on object detection have greatly inspired us to solve the data imbalance issue in NLP. Losses ::: Notation For illustration purposes, we use the binary classification task to demonstrate how different losses work. 
The mechanism can be easily extended to multi-class classification. Let $\lbrace x_i\rbrace $ denote a set of instances. Each $x_i$ is associated with a golden label vector $y_i = [y_{i0},y_{i1} ]$, where $y_{i1}\in \lbrace 0,1\rbrace $ and $y_{i0}\in \lbrace 0,1\rbrace $ respectively denote the positive and negative classes, and thus $y_i$ can be either $[1,0]$ or $[0,1]$. Let $p_i = [p_{i0},p_{i1} ]$ denote the probability vector, where $p_{i1}$ and $p_{i0}$ respectively denote the probability that a model assigns the positive and negative label to $x_i$. Losses ::: Cross Entropy Loss The vanilla cross entropy (CE) loss is given by: As can be seen from Eq.DISPLAY_FORM8, each $x_i$ contributes equally to the final objective. Two strategies are normally used to address the case where we do not wish all $x_i$ to be treated equally: associating different classes with different weighting factors $\alpha $, or resampling the datasets. For the former, Eq.DISPLAY_FORM8 is adjusted as follows: where $\alpha _i\in [0,1]$ may be set by the inverse class frequency or treated as a hyperparameter to set by cross validation. In this work, we use $\lg (\frac{n-n_t}{n_t}+K)$ to calculate the coefficient $\alpha $, where $n_t$ is the number of samples with class $t$ and $n$ is the total number of samples in the training set. $K$ is a hyperparameter to tune. The data resampling strategy constructs a new dataset by sampling training examples from the original dataset based on human-designed criteria, e.g., extracting an equal number of training samples from each class. Both strategies are equivalent to changing the data distribution and thus are of the same nature. Empirically, these two methods are not widely used due to the trickiness of selecting $\alpha $, especially for multi-class classification tasks, and the fact that inappropriate selection can easily bias towards rare classes BIBREF32. Losses ::: Dice coefficient and Tversky index The Sørensen–Dice coefficient BIBREF0, BIBREF33, dice coefficient (DSC) for short, is an F1-oriented statistic used to gauge the similarity of two sets. Given two sets $A$ and $B$, the dice coefficient between them is given as follows: In our case, $A$ is the set that contains all positive examples predicted by a specific model, and $B$ is the set of all golden positive examples in the dataset. When applied to boolean data with the definitions of true positive (TP), false positive (FP), and false negative (FN), it can then be written as follows: For an individual example $x_i$, its corresponding DSC loss is given as follows: As can be seen, a negative example with $y_{i1}=0$ does not contribute to the objective. For smoothing purposes, it is common to add a $\gamma $ factor to both the numerator and the denominator, leading to the following form: As can be seen, negative examples, with $y_{i1}$ being 0 and DSC being $\frac{\gamma }{ p_{i1}+\gamma }$, then also contribute to the training. Additionally, milletari2016v proposed to change the denominator to the square form for faster convergence, which leads to the following dice loss (DL): Another version of DL is to directly compute the set-level dice coefficient instead of the sum of individual dice coefficients. We choose the latter due to ease of optimization. The Tversky index (TI), which can be thought of as an approximation of the $F_{\beta }$ score, extends the dice coefficient to a more general case.
Given two sets $A$ and $B$, tversky index is computed as follows: Tversky index offers the flexibility in controlling the tradeoff between false-negatives and false-positives. It degenerates to DSC if $\alpha =\beta =0.5$. The Tversky loss (TL) for the training set $\lbrace x_i,y_i\rbrace $ is thus as follows: Losses ::: Self-adusting Dice Loss Consider a simple case where the dataset consists of only one example $x_i$, which is classified as positive as long as $p_{i1}$ is larger than 0.5. The computation of $F1$ score is actually as follows: Comparing Eq.DISPLAY_FORM14 with Eq.DISPLAY_FORM22, we can see that Eq.DISPLAY_FORM14 is actually a soft form of $F1$, using a continuous $p$ rather than the binary $\mathbb {I}( p_{i1}>0.5)$. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones, which has a huge negative effect on the final F1 performance. To address this issue, we propose to multiply the soft probability $p$ with a decaying factor $(1-p)$, changing Eq.DISPLAY_FORM22 to the following form: One can think $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives gives an explanation from the perspective in derivative: the derivative of $\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified. A close look at Eq.DISPLAY_FORM14 reveals that it actually mimics the idea of focal loss (FL for short) BIBREF16 for object detection in vision. Focal loss was proposed for one-stage object detector to handle foreground-background tradeoff encountered during training. It down-weights the loss assigned to well-classified examples by adding a $(1-p)^{\beta }$ factor, leading the final loss to be $(1-p)^{\beta }\log p$. In Table TABREF18, we show the losses used in our experiments, which is described in the next section. Experiments We evaluate the proposed method on four NLP tasks: part-of-speech tagging, named entity recognition, machine reading comprehension and paraphrase identification. Baselines in our experiments are optimized by using the standard cross-entropy training objective. Experiments ::: Part-of-Speech Tagging Part-of-speech tagging (POS) is the task of assigning a label (e.g., noun, verb, adjective) to each word in a given text. In this paper, we choose BERT as the backbone and conduct experiments on three Chinese POS datasets. We report the span-level micro-averaged precision, recall and F1 for evaluation. Hyperparameters are tuned on the corresponding development set of each dataset. Experiments ::: Part-of-Speech Tagging ::: Datasets We conduct experiments on the widely used Chinese Treebank 5.0, 6.0 as well as UD1.4. CTB5 is a Chinese dataset for tagging and parsing, which contains 507,222 words, 824,983 characters and 18,782 sentences extracted from newswire sources. CTB6 is an extension of CTB5, containing 781,351 words, 1,285,149 characters and 28,295 sentences. 
UD is the abbreviation of Universal Dependencies, which is a framework for consistent annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages. In this work, we use UD1.4 for Chinese POS tagging. Experiments ::: Part-of-Speech Tagging ::: Baselines We use the following baselines: Joint-POS: shao2017character jointly learns Chinese word segmentation and POS. Lattice-LSTM: lattice2018zhang constructs a word-character lattice. Bert-Tagger: devlin2018bert treats part-of-speech as a tagging task. Experiments ::: Part-of-Speech Tagging ::: Results Table presents the experimental results on the POS task. As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4. As far as we are concerned, we are achieving SOTA performances on the three datasets. Weighted cross entropy and focal loss only gain a little performance improvement on CTB5 and CTB6, and the dice loss obtains huge gain on CTB5 but not on CTB6, which indicates the three losses are not consistently robust in resolving the data imbalance issue. The proposed DSC loss performs robustly on all the three datasets. Experiments ::: Named Entity Recognition Named entity recognition (NER) refers to the task of detecting the span and semantic category of entities from a chunk of text. Our implementation uses the current state-of-the-art BERT-MRC model proposed by xiaoya2019ner as a backbone. For English datasets, we use BERT$_\text{Large}$ English checkpoints, while for Chinese we use the official Chinese checkpoints. We report span-level micro-averaged precision, recall and F1-score. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Named Entity Recognition ::: Datasets For the NER task, we consider both Chinese datasets, i.e., OntoNotes4.0 BIBREF34 and MSRA BIBREF35, and English datasets, i.e., CoNLL2003 BIBREF36 and OntoNotes5.0 BIBREF37. CoNLL2003 is an English dataset with 4 entity types: Location, Organization, Person and Miscellaneous. We followed data processing protocols in BIBREF14. English OntoNotes5.0 consists of texts from a wide variety of sources and contains 18 entity types. We use the standard train/dev/test split of CoNLL2012 shared task. Chinese MSRA performs as a Chinese benchmark dataset containing 3 entity types. Data in MSRA is collected from news domain. Since the development set is not provided in the original MSRA dataset, we randomly split the training set into training and development splits by 9:1. We use the official test set for evaluation. Chinese OntoNotes4.0 is a Chinese dataset and consists of texts from news domain, which has 18 entity types. In this paper, we take the same data split as wu2019glyce did. Experiments ::: Named Entity Recognition ::: Baselines We use the following baselines: ELMo: a tagging model from peters2018deep. Lattice-LSTM: lattice2018zhang constructs a word-character lattice, only used in Chinese datasets. CVT: from kevin2018cross, which uses Cross-View Training(CVT) to improve the representations of a Bi-LSTM encoder. Bert-Tagger: devlin2018bert treats NER as a tagging task. Glyce-BERT: wu2019glyce combines glyph information with BERT pretraining. BERT-MRC: The current SOTA model for both Chinese and English NER datasets proposed by xiaoya2019ner, which formulate NER as machine reading comprehension task. 
Experiments ::: Named Entity Recognition ::: Results Table shows experimental results on NER datasets. For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively. We observe huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively. As far as we are concerned, we are setting new SOTA performances on all of the four NER datasets. Experiments ::: Machine Reading Comprehension Machine reading comprehension (MRC) BIBREF39, BIBREF40, BIBREF41, BIBREF40, BIBREF42, BIBREF15 has become a central task in natural language understanding. MRC in the SQuAD-style is to predict the answer span in the passage given a question and the passage. In this paper, we choose the SQuAD-style MRC task and report Extract Match (EM) in addition to F1 score on validation set. All hyperparameters are tuned on the development set of each dataset. Experiments ::: Machine Reading Comprehension ::: Datasets The following five datasets are used for MRC task: SQuAD v1.1, SQuAD v2.0 BIBREF4, BIBREF6 and Quoref BIBREF8. SQuAD v1.1 and SQuAD v2.0 are the most widely used QA benchmarks. SQuAD1.1 is a collection of 100K crowdsourced question-answer pairs, and SQuAD2.0 extends SQuAD1.1 allowing no short answer exists in the provided passage. Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems, containing 24K questions over 4.7K paragraphs from Wikipedia. Experiments ::: Machine Reading Comprehension ::: Baselines We use the following baselines: QANet: qanet2018 builds a model based on convolutions and self-attention. Convolution to model local interactions and self-attention to model global interactions. BERT: devlin2018bert treats NER as a tagging task. XLNet: xlnet2019 proposes a generalized autoregressive pretraining method that enables learning bidirectional contexts. Experiments ::: Machine Reading Comprehension ::: Results Table shows the experimental results for MRC tasks. With either BERT or XLNet, our proposed DSC loss obtains significant performance boost on both EM and F1. For SQuADv1.1, our proposed method outperforms XLNet by +1.25 in terms of F1 score and +0.84 in terms of EM and achieves 87.65 on EM and 89.51 on F1 for SQuAD v2.0. Moreover, on QuoRef, the proposed method surpasses XLNet results by +1.46 on EM and +1.41 on F1. Another observation is that, XLNet outperforms BERT by a huge margin, and the proposed DSC loss can obtain further performance improvement by an average score above 1.0 in terms of both EM and F1, which indicates the DSC loss is complementary to the model structures. Experiments ::: Paraphrase Identification Paraphrases are textual expressions that have the same semantic meaning using different surface words. Paraphrase identification (PI) is the task of identifying whether two sentences have the same meaning or not. We use BERT BIBREF11 and XLNet BIBREF43 as backbones and report F1 score for comparison. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Paraphrase Identification ::: Datasets We conduct experiments on two widely used datasets for PI task: MRPC BIBREF44 and QQP. MRPC is a corpus of sentence pairs automatically extracted from online news sources, with human annotations of whether the sentence pairs are semantically equivalent. The MRPC dataset has imbalanced classes (68% positive, 32% for negative). 
QQP is a collection of question pairs from the community question-answering website Quora. The class distribution in QQP is also unbalanced (37% positive, 63% negative). Experiments ::: Paraphrase Identification ::: Results Table shows the results for PI task. We find that replacing the training objective with DSC introduces performance boost for both BERT and XLNet. Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP. Ablation Studies ::: The Effect of Dice Loss on Accuracy-oriented Tasks We argue that the most commonly used cross-entropy objective is actually accuracy-oriented, whereas the proposed dice loss (DL) performs as a hard version of F1-score. To explore the effect of the dice loss on accuracy-oriented tasks such as text classification, we conduct experiments on the Stanford Sentiment Treebank sentiment classification datasets including SST-2 and SST-5. We fine-tune BERT$_\text{Large}$ with different training objectives. Experiment results for SST are shown in . For SST-5, BERT with CE achieves 55.57 in terms of accuracy, with DL and DSC losses slightly degrade the accuracy performance and achieve 54.63 and 55.19, respectively. For SST-2, BERT with CE achieves 94.9 in terms of accuracy. The same as SST-5, we observe a slight performance drop with DL and DSC, which means that the dice loss actually works well for F1 but not for accuracy. Ablation Studies ::: The Effect of Hyperparameters in Tversky index As mentioned in Section SECREF10, Tversky index (TI) offers the flexibility in controlling the tradeoff between false-negatives and false-positives. In this subsection, we explore the effect of hyperparameters (i.e., $\alpha $ and $\beta $) in TI to test how they manipulate the tradeoff. We conduct experiments on the Chinese OntoNotes4.0 NER dataset and English QuoRef MRC dataset to examine the influence of tradeoff between precision and recall. Experiment results are shown in Table . The highest F1 for Chinese OntoNotes4.0 is 84.67 when $\alpha $ is set to 0.6 while for QuoRef, the highest F1 is 68.44 when $\alpha $ is set to 0.4. In addition, we can observe that the performance varies a lot as $\alpha $ changes in distinct datasets, which shows that the hyperparameters $\alpha ,\beta $ play an important role in the proposed method. Conclusion In this paper, we alleviate the severe data imbalance issue in NLP tasks. We propose to use dice loss in replacement of the standard cross-entropy loss, which performs as a soft version of F1 score. Using dice loss can help narrow the gap between training objectives and evaluation metrics. Empirically, we show that the proposed training objective leads to significant performance boost for part-of-speech, named entity recognition, machine reading comprehension and paraphrase identification tasks. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What are method's improvements of F1 for NER task for English and Chinese datasets? Only give me the answer and do not output any other words.
Answer:
[ "English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively, Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively", "For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively., huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively" ]
276
5,435
single_doc_qa
b5cfce0a99a941bf9071c32c7f0e99ef
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Machine Reading Comprehension (MRC), as the name suggests, requires a machine to read a passage and answer its relevant questions. Since the answer to each question is supposed to stem from the corresponding passage, a common MRC solution is to develop a neural-network-based MRC model that predicts an answer span (i.e. the answer start position and the answer end position) from the passage of each given passage-question pair. To facilitate the explorations and innovations in this area, many MRC datasets have been established, such as SQuAD BIBREF0 , MS MARCO BIBREF1 , and TriviaQA BIBREF2 . Consequently, many pioneering MRC models have been proposed, such as BiDAF BIBREF3 , R-NET BIBREF4 , and QANet BIBREF5 . According to the leader board of SQuAD, the state-of-the-art MRC models have achieved the same performance as human beings. However, does this imply that they have possessed the same reading comprehension ability as human beings? OF COURSE NOT. There is a huge gap between MRC models and human beings, which is mainly reflected in the hunger for data and the robustness to noise. On the one hand, developing MRC models requires a large amount of training examples (i.e. the passage-question pairs labeled with answer spans), while human beings can achieve good performance on evaluation examples (i.e. the passage-question pairs to address) without training examples. On the other hand, BIBREF6 revealed that intentionally injected noise (e.g. misleading sentences) in evaluation examples causes the performance of MRC models to drop significantly, while human beings are far less likely to suffer from this. The reason for these phenomena, we believe, is that MRC models can only utilize the knowledge contained in each given passage-question pair, but in addition to this, human beings can also utilize general knowledge. A typical category of general knowledge is inter-word semantic connections. As shown in Table TABREF1 , such general knowledge is essential to the reading comprehension ability of human beings. A promising strategy to bridge the gap mentioned above is to integrate the neural networks of MRC models with the general knowledge of human beings. To this end, it is necessary to solve two problems: extracting general knowledge from passage-question pairs and utilizing the extracted general knowledge in the prediction of answer spans. The first problem can be solved with knowledge bases, which store general knowledge in structured forms. A broad variety of knowledge bases are available, such as WordNet BIBREF7 storing semantic knowledge, ConceptNet BIBREF8 storing commonsense knowledge, and Freebase BIBREF9 storing factoid knowledge. In this paper, we limit the scope of general knowledge to inter-word semantic connections, and thus use WordNet as our knowledge base. The existing way to solve the second problem is to encode general knowledge in vector space so that the encoding results can be used to enhance the lexical or contextual representations of words BIBREF10 , BIBREF11 . However, this is an implicit way to utilize general knowledge, since in this way we can neither understand nor control the functioning of general knowledge. In this paper, we discard the existing implicit way and instead explore an explicit (i.e. 
understandable and controllable) way to utilize general knowledge. The contribution of this paper is two-fold. On the one hand, we propose a data enrichment method, which uses WordNet to extract inter-word semantic connections as general knowledge from each given passage-question pair. On the other hand, we propose an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the above extracted general knowledge to assist its attention mechanisms. Based on the data enrichment method, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them. When only a subset ( INLINEFORM0 – INLINEFORM1 ) of the training examples are available, KAR outperforms the state-of-the-art MRC models by a large margin, and is still reasonably robust to noise. Data Enrichment Method In this section, we elaborate a WordNet-based data enrichment method, which is aimed at extracting inter-word semantic connections from each passage-question pair in our MRC dataset. The extraction is performed in a controllable manner, and the extracted results are provided as general knowledge to our MRC model. Semantic Relation Chain WordNet is a lexical database of English, where words are organized into synsets according to their senses. A synset is a set of words expressing the same sense so that a word having multiple senses belongs to multiple synsets, with each synset corresponding to a sense. Synsets are further related to each other through semantic relations. According to the WordNet interface provided by NLTK BIBREF12 , there are totally sixteen types of semantic relations (e.g. hypernyms, hyponyms, holonyms, meronyms, attributes, etc.). Based on synset and semantic relation, we define a new concept: semantic relation chain. A semantic relation chain is a concatenated sequence of semantic relations, which links a synset to another synset. For example, the synset “keratin.n.01” is related to the synset “feather.n.01” through the semantic relation “substance holonym”, the synset “feather.n.01” is related to the synset “bird.n.01” through the semantic relation “part holonym”, and the synset “bird.n.01” is related to the synset “parrot.n.01” through the semantic relation “hyponym”, thus “substance holonym INLINEFORM0 part holonym INLINEFORM1 hyponym” is a semantic relation chain, which links the synset “keratin.n.01” to the synset “parrot.n.01”. We name each semantic relation in a semantic relation chain as a hop, therefore the above semantic relation chain is a 3-hop chain. By the way, each single semantic relation is equivalent to a 1-hop chain. Inter-word Semantic Connection The key problem in the data enrichment method is determining whether a word is semantically connected to another word. If so, we say that there exists an inter-word semantic connection between them. To solve this problem, we define another new concept: the extended synsets of a word. Given a word INLINEFORM0 , whose synsets are represented as a set INLINEFORM1 , we use another set INLINEFORM2 to represent its extended synsets, which includes all the synsets that are in INLINEFORM3 or that can be linked to from INLINEFORM4 through semantic relation chains. Theoretically, if there is no limitation on semantic relation chains, INLINEFORM5 will include all the synsets in WordNet, which is meaningless in most situations. Therefore, we use a hyper-parameter INLINEFORM6 to represent the permitted maximum hop count of semantic relation chains. 
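To make the extended-synset construction concrete, the sketch below uses NLTK's WordNet interface to expand a word's synsets through semantic-relation chains of at most a given hop count and then tests two words for an inter-word semantic connection by set intersection. This is an illustrative sketch rather than the authors' code: the relation list approximates the sixteen relation types mentioned above, and the intersection test is an assumed reading of the rule hidden behind the INLINEFORM placeholders.

```python
from nltk.corpus import wordnet as wn  # requires nltk plus the 'wordnet' corpus

# One-hop relation pointers exposed by NLTK's WordNet interface
# (an approximation of the sixteen relation types mentioned above).
RELATIONS = [
    "hypernyms", "instance_hypernyms", "hyponyms", "instance_hyponyms",
    "member_holonyms", "substance_holonyms", "part_holonyms",
    "member_meronyms", "substance_meronyms", "part_meronyms",
    "attributes", "entailments", "causes", "also_sees",
    "verb_groups", "similar_tos",
]

def extended_synsets(word, max_hops):
    """Synsets of `word` plus every synset reachable through semantic
    relation chains of at most `max_hops` hops."""
    frontier = set(wn.synsets(word))
    reached = set(frontier)
    for _ in range(max_hops):
        nxt = {rel_syn
               for syn in frontier
               for rel in RELATIONS
               for rel_syn in getattr(syn, rel)()}
        frontier = nxt - reached
        reached |= frontier
    return reached

def semantically_connected(w1, w2, max_hops=3):
    # Assumed reading of the rule: w1 is connected to w2 iff the synsets of
    # w2 intersect the extended synsets of w1 (built with `max_hops` hops).
    return bool(extended_synsets(w1, max_hops) & set(wn.synsets(w2)))

# Expected True via the keratin -> feather -> bird -> parrot chain above.
print(semantically_connected("keratin", "parrot", max_hops=3))
```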
That is to say, only the chains having no more than INLINEFORM7 hops can be used to construct INLINEFORM8 so that INLINEFORM9 becomes a function of INLINEFORM10 : INLINEFORM11 (if INLINEFORM12 , we will have INLINEFORM13 ). Based on the above statements, we formulate a heuristic rule for determining inter-word semantic connections: a word INLINEFORM14 is semantically connected to another word INLINEFORM15 if and only if INLINEFORM16 . General Knowledge Extraction Given a passage-question pair, the inter-word semantic connections that connect any word to any passage word are regarded as the general knowledge we need to extract. Considering the requirements of our MRC model, we only extract the positional information of such inter-word semantic connections. Specifically, for each word INLINEFORM0 , we extract a set INLINEFORM1 , which includes the positions of the passage words that INLINEFORM2 is semantically connected to (if INLINEFORM3 itself is a passage word, we will exclude its own position from INLINEFORM4 ). We can control the amount of the extracted results by setting the hyper-parameter INLINEFORM5 : if we set INLINEFORM6 to 0, inter-word semantic connections will only exist between synonyms; if we increase INLINEFORM7 , inter-word semantic connections will exist between more words. That is to say, by increasing INLINEFORM8 within a certain range, we can usually extract more inter-word semantic connections from a passage-question pair, and thus can provide the MRC model with more general knowledge. However, due to the complexity and diversity of natural languages, only a part of the extracted results can serve as useful general knowledge, while the rest of them are useless for the prediction of answer spans, and the proportion of the useless part always rises when INLINEFORM9 is set larger. Therefore we set INLINEFORM10 through cross validation (i.e. according to the performance of the MRC model on the development examples). Knowledge Aided Reader In this section, we elaborate our MRC model: Knowledge Aided Reader (KAR). The key components of most existing MRC models are their attention mechanisms BIBREF13 , which are aimed at fusing the associated representations of each given passage-question pair. These attention mechanisms generally fall into two categories: the first one, which we name as mutual attention, is aimed at fusing the question representations into the passage representations so as to obtain the question-aware passage representations; the second one, which we name as self attention, is aimed at fusing the question-aware passage representations into themselves so as to obtain the final passage representations. Although KAR is equipped with both categories, its most remarkable feature is that it explicitly uses the general knowledge extracted by the data enrichment method to assist its attention mechanisms. Therefore we separately name the attention mechanisms of KAR as knowledge aided mutual attention and knowledge aided self attention. Task Definition Given a passage INLINEFORM0 and a relevant question INLINEFORM1 , the task is to predict an answer span INLINEFORM2 , where INLINEFORM3 , so that the resulting subsequence INLINEFORM4 from INLINEFORM5 is an answer to INLINEFORM6 . Overall Architecture As shown in Figure FIGREF7 , KAR is an end-to-end MRC model consisting of five layers: Lexicon Embedding Layer. This layer maps the words to the lexicon embeddings. The lexicon embedding of each word is composed of its word embedding and character embedding. 
For each word, we use the pre-trained GloVe BIBREF14 word vector as its word embedding, and obtain its character embedding with a Convolutional Neural Network (CNN) BIBREF15 . For both the passage and the question, we pass the concatenation of the word embeddings and the character embeddings through a shared dense layer with ReLU activation, whose output dimensionality is INLINEFORM0 . Therefore we obtain the passage lexicon embeddings INLINEFORM1 and the question lexicon embeddings INLINEFORM2 . Context Embedding Layer. This layer maps the lexicon embeddings to the context embeddings. For both the passage and the question, we process the lexicon embeddings (i.e. INLINEFORM0 for the passage and INLINEFORM1 for the question) with a shared bidirectional LSTM (BiLSTM) BIBREF16 , whose hidden state dimensionality is INLINEFORM2 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the passage context embeddings INLINEFORM3 and the question context embeddings INLINEFORM4 . Coarse Memory Layer. This layer maps the context embeddings to the coarse memories. First we use knowledge aided mutual attention (introduced later) to fuse INLINEFORM0 into INLINEFORM1 , the outputs of which are represented as INLINEFORM2 . Then we process INLINEFORM3 with a BiLSTM, whose hidden state dimensionality is INLINEFORM4 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the coarse memories INLINEFORM5 , which are the question-aware passage representations. Refined Memory Layer. This layer maps the coarse memories to the refined memories. First we use knowledge aided self attention (introduced later) to fuse INLINEFORM0 into themselves, the outputs of which are represented as INLINEFORM1 . Then we process INLINEFORM2 with a BiLSTM, whose hidden state dimensionality is INLINEFORM3 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the refined memories INLINEFORM4 , which are the final passage representations. Answer Span Prediction Layer. This layer predicts the answer start position and the answer end position based on the above layers. First we obtain the answer start position distribution INLINEFORM0 : INLINEFORM1 INLINEFORM2 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters; INLINEFORM3 represents the refined memory of each passage word INLINEFORM4 (i.e. the INLINEFORM5 -th column in INLINEFORM6 ); INLINEFORM7 represents the question summary obtained by performing an attention pooling over INLINEFORM8 . Then we obtain the answer end position distribution INLINEFORM9 : INLINEFORM10 INLINEFORM11 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters; INLINEFORM3 represents vector concatenation. Finally we construct an answer span prediction matrix INLINEFORM4 , where INLINEFORM5 represents the upper triangular matrix of a matrix INLINEFORM6 . Therefore, for the training, we minimize INLINEFORM7 on each training example whose labeled answer span is INLINEFORM8 ; for the inference, we separately take the row index and column index of the maximum element in INLINEFORM9 as INLINEFORM10 and INLINEFORM11 . Knowledge Aided Mutual Attention As a part of the coarse memory layer, knowledge aided mutual attention is aimed at fusing the question context embeddings INLINEFORM0 into the passage context embeddings INLINEFORM1 , where the key problem is to calculate the similarity between each passage context embedding INLINEFORM2 (i.e. 
the INLINEFORM3 -th column in INLINEFORM4 ) and each question context embedding INLINEFORM5 (i.e. the INLINEFORM6 -th column in INLINEFORM7 ). To solve this problem, BIBREF3 proposed a similarity function: INLINEFORM8 where INLINEFORM0 is a trainable parameter; INLINEFORM1 represents element-wise multiplication. This similarity function has also been adopted by several other works BIBREF17 , BIBREF5 . However, since context embeddings contain high-level information, we believe that introducing the pre-extracted general knowledge into the calculation of such similarities will make the results more reasonable. Therefore we modify the above similarity function to the following form: INLINEFORM2 where INLINEFORM0 represents the enhanced context embedding of a word INLINEFORM1 . We use the pre-extracted general knowledge to construct the enhanced context embeddings. Specifically, for each word INLINEFORM2 , whose context embedding is INLINEFORM3 , to construct its enhanced context embedding INLINEFORM4 , first recall that we have extracted a set INLINEFORM5 , which includes the positions of the passage words that INLINEFORM6 is semantically connected to, thus by gathering the columns in INLINEFORM7 whose indexes are given by INLINEFORM8 , we obtain the matching context embeddings INLINEFORM9 . Then by constructing a INLINEFORM10 -attended summary of INLINEFORM11 , we obtain the matching vector INLINEFORM12 (if INLINEFORM13 , which makes INLINEFORM14 , we will set INLINEFORM15 ): INLINEFORM16 INLINEFORM17 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters; INLINEFORM3 represents the INLINEFORM4 -th column in INLINEFORM5 . Finally we pass the concatenation of INLINEFORM6 and INLINEFORM7 through a dense layer with ReLU activation, whose output dimensionality is INLINEFORM8 . Therefore we obtain the enhanced context embedding INLINEFORM9 . Based on the modified similarity function and the enhanced context embeddings, to perform knowledge aided mutual attention, first we construct a knowledge aided similarity matrix INLINEFORM0 , where each element INLINEFORM1 . Then following BIBREF5 , we construct the passage-attended question summaries INLINEFORM2 and the question-attended passage summaries INLINEFORM3 : INLINEFORM4 INLINEFORM5 where INLINEFORM0 represents softmax along the row dimension and INLINEFORM1 along the column dimension. Finally following BIBREF17 , we pass the concatenation of INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 through a dense layer with ReLU activation, whose output dimensionality is INLINEFORM6 . Therefore we obtain the outputs INLINEFORM7 . Knowledge Aided Self Attention As a part of the refined memory layer, knowledge aided self attention is aimed at fusing the coarse memories INLINEFORM0 into themselves. If we simply follow the self attentions of other works BIBREF4 , BIBREF18 , BIBREF19 , BIBREF17 , then for each passage word INLINEFORM1 , we should fuse its coarse memory INLINEFORM2 (i.e. the INLINEFORM3 -th column in INLINEFORM4 ) with the coarse memories of all the other passage words. However, we believe that this is both unnecessary and distracting, since each passage word has nothing to do with many of the other passage words. Thus we use the pre-extracted general knowledge to guarantee that the fusion of coarse memories for each passage word will only involve a precise subset of the other passage words. 
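Both knowledge-aided mechanisms boil down to the same operation: for a given word, gather only the representations at the passage positions it is semantically connected to and compute an attended summary of them. The sketch below shows that operation in PyTorch. Tensor sizes and the scoring function are assumptions made for illustration, since the paper's exact trainable form is hidden behind the INLINEFORM placeholders; this is not the authors' implementation.

```python
import torch

def matching_vector(query_vec, passage_mat, positions, w):
    """Attended summary (matching vector) of the passage columns whose
    indexes are in `positions`, scored against `query_vec`.

    query_vec:   (d,)    representation of the current word
    passage_mat: (d, n)  column-wise passage representations
    positions:   list of passage positions the word is connected to
    w:           (2 * d,) assumed scoring parameters (illustrative only)
    """
    if not positions:                       # no connections -> zero vector
        return torch.zeros_like(query_vec)
    cols = passage_mat[:, positions]        # (d, k) connected columns only
    # Assumed additive-style score over [query ; column]; the paper's exact
    # trainable form is not recoverable from the placeholders.
    scores = torch.stack([w @ torch.cat([query_vec, cols[:, j]])
                          for j in range(cols.shape[1])])
    alpha = torch.softmax(scores, dim=0)    # attention over connected positions
    return cols @ alpha                     # (d,) matching vector

d, n = 8, 5
torch.manual_seed(0)
c = torch.randn(d)                          # current word representation
P = torch.randn(d, n)                       # passage representations
print(matching_vector(c, P, positions=[1, 3], w=torch.randn(2 * d)))
```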
Specifically, for each passage word INLINEFORM5 , whose coarse memory is INLINEFORM6 , to perform the fusion of coarse memories, first recall that we have extracted a set INLINEFORM7 , which includes the positions of the other passage words that INLINEFORM8 is semantically connected to, thus by gathering the columns in INLINEFORM9 whose indexes are given by INLINEFORM10 , we obtain the matching coarse memories INLINEFORM11 . Then by constructing a INLINEFORM12 -attended summary of INLINEFORM13 , we obtain the matching vector INLINEFORM14 (if INLINEFORM15 , which makes INLINEFORM16 , we will set INLINEFORM17 ): INLINEFORM18 INLINEFORM19 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. Finally we pass the concatenation of INLINEFORM3 and INLINEFORM4 through a dense layer with ReLU activation, whose output dimensionality is INLINEFORM5 . Therefore we obtain the fusion result INLINEFORM6 , and further the outputs INLINEFORM7 . Related Works Attention Mechanisms. Besides those mentioned above, other interesting attention mechanisms include performing multi-round alignment to avoid the problems of attention redundancy and attention deficiency BIBREF20 , and using mutual attention as a skip-connector to densely connect pairwise layers BIBREF21 . Data Augmentation. It is proved that properly augmenting training examples can improve the performance of MRC models. For example, BIBREF22 trained a generative model to generate questions based on unlabeled text, which substantially boosted their performance; BIBREF5 trained a back-and-forth translation model to paraphrase training examples, which brought them a significant performance gain. Multi-step Reasoning. Inspired by the fact that human beings are capable of understanding complex documents by reading them over and over again, multi-step reasoning was proposed to better deal with difficult MRC tasks. For example, BIBREF23 used reinforcement learning to dynamically determine the number of reasoning steps; BIBREF19 fixed the number of reasoning steps, but used stochastic dropout in the output layer to avoid step bias. Linguistic Embeddings. It is both easy and effective to incorporate linguistic embeddings into the input layer of MRC models. For example, BIBREF24 and BIBREF19 used POS embeddings and NER embeddings to construct their input embeddings; BIBREF25 used structural embeddings based on parsing trees to constructed their input embeddings. Transfer Learning. Several recent breakthroughs in MRC benefit from feature-based transfer learning BIBREF26 , BIBREF27 and fine-tuning-based transfer learning BIBREF28 , BIBREF29 , which are based on certain word-level or sentence-level models pre-trained on large external corpora in certain supervised or unsupervised manners. Experimental Settings MRC Dataset. The MRC dataset used in this paper is SQuAD 1.1, which contains over INLINEFORM0 passage-question pairs and has been randomly partitioned into three parts: a training set ( INLINEFORM1 ), a development set ( INLINEFORM2 ), and a test set ( INLINEFORM3 ). Besides, we also use two of its adversarial sets, namely AddSent and AddOneSent BIBREF6 , to evaluate the robustness to noise of MRC models. The passages in the adversarial sets contain misleading sentences, which are aimed at distracting MRC models. 
Specifically, each passage in AddSent contains several sentences that are similar to the question but not contradictory to the answer, while each passage in AddOneSent contains a human-approved random sentence that may be unrelated to the passage. Implementation Details. We tokenize the MRC dataset with spaCy 2.0.13 BIBREF30 , manipulate WordNet 3.0 with NLTK 3.3, and implement KAR with TensorFlow 1.11.0 BIBREF31 . For the data enrichment method, we set the hyper-parameter INLINEFORM0 to 3. For the dense layers and the BiLSTMs, we set the dimensionality unit INLINEFORM1 to 600. For model optimization, we apply the Adam BIBREF32 optimizer with a learning rate of INLINEFORM2 and a mini-batch size of 32. For model evaluation, we use Exact Match (EM) and F1 score as evaluation metrics. To avoid overfitting, we apply dropout BIBREF33 to the dense layers and the BiLSTMs with a dropout rate of INLINEFORM3 . To boost the performance, we apply exponential moving average with a decay rate of INLINEFORM4 . Model Comparison in both Performance and the Robustness to Noise We compare KAR with other MRC models in both performance and the robustness to noise. Specifically, we not only evaluate the performance of KAR on the development set and the test set, but also do this on the adversarial sets. As for the comparative objects, we only consider the single MRC models that rank in the top 20 on the SQuAD 1.1 leader board and have reported their performance on the adversarial sets. There are totally five such comparative objects, which can be considered as representatives of the state-of-the-art MRC models. As shown in Table TABREF12 , on the development set and the test set, the performance of KAR is on par with that of the state-of-the-art MRC models; on the adversarial sets, KAR outperforms the state-of-the-art MRC models by a large margin. That is to say, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them. To verify the effectiveness of general knowledge, we first study the relationship between the amount of general knowledge and the performance of KAR. As shown in Table TABREF13 , by increasing INLINEFORM0 from 0 to 5 in the data enrichment method, the amount of general knowledge rises monotonically, but the performance of KAR first rises until INLINEFORM1 reaches 3 and then drops down. Then we conduct an ablation study by replacing the knowledge aided attention mechanisms with the mutual attention proposed by BIBREF3 and the self attention proposed by BIBREF4 separately, and find that the F1 score of KAR drops by INLINEFORM2 on the development set, INLINEFORM3 on AddSent, and INLINEFORM4 on AddOneSent. Finally we find that after only one epoch of training, KAR already achieves an EM of INLINEFORM5 and an F1 score of INLINEFORM6 on the development set, which is even better than the final performance of several strong baselines, such as DCN (EM / F1: INLINEFORM7 / INLINEFORM8 ) BIBREF36 and BiDAF (EM / F1: INLINEFORM9 / INLINEFORM10 ) BIBREF3 . The above empirical findings imply that general knowledge indeed plays an effective role in KAR. To demonstrate the advantage of our explicit way to utilize general knowledge over the existing implicit way, we compare the performance of KAR with that reported by BIBREF10 , which used an encoding-based method to utilize the general knowledge dynamically retrieved from Wikipedia and ConceptNet. 
Since their best model only achieved an EM of INLINEFORM0 and an F1 score of INLINEFORM1 on the development set, which is much lower than the performance of KAR, we have good reason to believe that our explicit way works better than the existing implicit way. Model Comparison in the Hunger for Data We compare KAR with other MRC models in the hunger for data. Specifically, instead of using all the training examples, we produce several training subsets (i.e. subsets of the training examples) so as to study the relationship between the proportion of the available training examples and the performance. We produce each training subset by sampling a specific number of questions from all the questions relevant to each passage. By separately sampling 1, 2, 3, and 4 questions on each passage, we obtain four training subsets, which separately contain INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 of the training examples. As shown in Figure FIGREF15 , with KAR, SAN (re-implemented), and QANet (re-implemented without data augmentation) trained on these training subsets, we evaluate their performance on the development set, and find that KAR performs much better than SAN and QANet. As shown in Figure FIGREF16 and Figure FIGREF17 , with the above KAR, SAN, and QANet trained on the same training subsets, we also evaluate their performance on the adversarial sets, and still find that KAR performs much better than SAN and QANet. That is to say, when only a subset of the training examples are available, KAR outperforms the state-of-the-art MRC models by a large margin, and is still reasonably robust to noise. Analysis According to the experimental results, KAR is not only comparable in performance with the state-of-the-art MRC models, but also superior to them in terms of both the hunger for data and the robustness to noise. The reasons for these achievements, we believe, are as follows: Conclusion In this paper, we innovatively integrate the neural networks of MRC models with the general knowledge of human beings. Specifically, inter-word semantic connections are first extracted from each given passage-question pair by a WordNet-based data enrichment method, and then provided as general knowledge to an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the general knowledge to assist its attention mechanisms. Experimental results show that KAR is not only comparable in performance with the state-of-the-art MRC models, but also superior to them in terms of both the hunger for data and the robustness to noise. In the future, we plan to use some larger knowledge bases, such as ConceptNet and Freebase, to improve the quality and scope of the general knowledge. Acknowledgments This work is partially supported by a research donation from iFLYTEK Co., Ltd., Hefei, China, and a discovery grant from Natural Sciences and Engineering Research Council (NSERC) of Canada. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Do the authors hypothesize that humans' robustness to noise is due to their general knowledge? Only give me the answer and do not output any other words.
Answer:
[ "Yes", "Yes" ]
276
5,563
single_doc_qa
1ff968c9cf834b77b0090c21eb193aff
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The automatic identification, extraction and representation of the information conveyed in texts is a key task nowadays. In fact, this research topic is increasing its relevance with the exponential growth of social networks and the need to have tools that are able to automatically process them BIBREF0. Some of the domains where it is more important to be able to perform this kind of action are the juridical and legal ones. Effectively, it is crucial to have the capability to analyse open access text sources, like social nets (Twitter and Facebook, for instance), blogs, online newspapers, and to be able to extract the relevant information and represent it in a knowledge base, allowing posterior inferences and reasoning. In the context of this work, we will present results of the R&D project Agatha, where we developed a pipeline of processes that analyses texts (in Portuguese, Spanish, or English) and is able to populate a specialized ontology BIBREF1 (related to criminal law) for the representation of events, depicted in such texts. Events are represented by objects having associated actions, agents, elements, places and time. After having populated the event ontology, we have an automatic process linking the identified entities to external referents, creating, this way, a linked data knowledge base. It is important to point out that, having the text information represented in an ontology allows us to perform complex queries and inferences, which can detect patterns of typical criminal actions. Another axe of innovation in this research is the development, for the Portuguese language, of a pipeline of Natural Language Processing (NLP) processes, that allows us to fully process sentences and represent their content in an ontology. Although there are several tools for the processing of the Portuguese language, the combination of all these steps in a integrated tool is a new contribution. Moreover, we have already explored other related research path, namely author profiling BIBREF2, aggression identification BIBREF3 and hate-speech detection BIBREF4 over social media, plus statute law retrieval and entailment for Japanese BIBREF5. The remainder of this paper is organized as follows: Section SECREF2 describes our proposed architecture together with the Portuguese modules for its computational processing. Section SECREF3 discusses different design options and Section SECREF4 provides our conclusions together with some pointers for future work. Framework for Processing Portuguese Text The framework for processing Portuguese texts is depicted in Fig. FIGREF2, which illustrates how relevant pieces of information are extracted from the text. Namely, input files (Portuguese texts) go through a series of modules: part-of-speech tagging, named entity recognition, dependency parsing, semantic role labeling, subject-verb-object triple extraction, and lexicon matching. The main goal of all the modules except lexicon matching is to identify events given in the text. These events are then used to populate an ontology. The lexicon matching, on the other hand, was created to link words that are found in the text source with the data available not only on Eurovoc BIBREF6 thesaurus but also on the EU's terminology database IATE BIBREF7 (see Section SECREF12 for details). 
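To give a feel for the first two stages of this pipeline, the snippet below runs part-of-speech tagging and named entity recognition over a Portuguese sentence. The paper uses the Freeling library for these steps; because I cannot vouch for Freeling's Python bindings, this sketch uses spaCy's Portuguese model purely as an illustrative stand-in, so the tags follow spaCy's scheme rather than the EAGLES tagset, and the model name assumes it has been installed locally.

```python
import spacy

# Assumes the small Portuguese model has been installed, e.g.:
#   python -m spacy download pt_core_news_sm
nlp = spacy.load("pt_core_news_sm")

doc = nlp("O suspeito vendeu o carro roubado em Lisboa por 5000 euros.")

# Part-of-speech tags (spaCy's scheme, standing in for the EAGLES tagset).
for token in doc:
    print(token.text, token.pos_, token.dep_, sep="\t")

# Named entities (analogues of PERSON / LOCATION / ORGANIZATION, plus money).
for ent in doc.ents:
    print(ent.text, ent.label_)
```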
Most of these modules are deeply related and are detailed in the subsequent subsections. Framework for Processing Portuguese Text ::: Part-Of-Speech Tagging Part-of-speech tagging happens after language detection. It labels each word with a tag that indicates its syntactic role in the sentence. For instance, a word could be a noun, verb, adjective or adverb (or other syntactic tag). We used the Freeling BIBREF8 library to provide the tags. This library resorts to a Hidden Markov Model as described by Brants BIBREF9. The end result is a tag for each word as described by the EAGLES tagset. Framework for Processing Portuguese Text ::: Named Entity Recognition We use the named entity recognition module after part-of-speech tagging. This module labels each part of the sentence into different categories such as "PERSON", "LOCATION", or "ORGANIZATION". We also used Freeling to label the named entities and the details of the algorithm are shown in the paper by Carreras et al. BIBREF10. Aside from the three aforementioned categories, we also extracted "DATE/TIME" and "CURRENCY" values by looking at the part-of-speech tags: date/time words have a tag of "W", while currencies have "Zm". Framework for Processing Portuguese Text ::: Dependency Parsing Dependency parsing involves tagging a word based on different features to indicate if it is dependent on another word. The Freeling library also has dependency parsing models for Portuguese. Since we wanted to build an SRL (Semantic Role Labeling) module on top of the dependency parser and the current released version of Freeling does not have an SRL module for Portuguese, we trained a different Portuguese dependency parsing model that was compatible (in terms of used tags) with the available annotated data. We used the dataset from System-T BIBREF11, which has SRL tags, as well as the other preceding tags. It was necessary to do some pre-processing and tag mapping in order to make it viable to train a Portuguese model. We made 589 tag conversions over 14 different categories. The breakdown of tag conversions per category is given in Table TABREF7. These rules can be further seen in the corresponding Github repository BIBREF12. The modified training and development datasets are also available on another Github repository page BIBREF13 for further research and comparison purposes. Framework for Processing Portuguese Text ::: Semantic Role Labeling We execute the SRL (Semantic Role Labeling) module after obtaining the word dependencies. This module aims at giving a semantic role to a syntactic constituent of a sentence. The semantic role is always in relation to a verb and these roles could either be an actor, object, time, or location, which are then tagged as A0, A1, AM-TMP, AM-LOC, respectively. We trained a model for this module on top of the dependency parser described in the previous subsection using the modified dataset from System-T. The module also needs co-reference resolution to work and, to achieve this, we adapted the Spanish co-reference modules for Portuguese, changing the words that are equivalent (in total, we changed 253 words). Framework for Processing Portuguese Text ::: SVO Extraction From the yield of the SRL (Semantic Role Labeling) module, our framework can distinguish actors, actions, places, time and objects from the sentences. Utilizing this extracted data, we can distinguish subject-verb-object (SVO) triples using the SVO extraction algorithm BIBREF14. 
The algorithm finds, for each sentence, the verb and the tuples related to that verb using Semantic Role Labeling (subsection SECREF8). After the extraction of SVOs from texts, they are inserted into a specific event ontology (see section SECREF12 for the creation of a knowledge base). Framework for Processing Portuguese Text ::: Lexicon Matching The sole purpose of this module is to find important terms and/or concepts in the extracted text. To do this, we use Eurovoc BIBREF6, a multilingual thesaurus that was developed for and by the European Union. Eurovoc has 21 fields and each field is further divided into a variable number of micro-thesauri. Here, due to the application of this work in the Agatha project (mentioned in Section SECREF1), we use the terms of the criminal law BIBREF15 micro-thesaurus. Further, we classified each term of the criminal law micro-thesaurus into four categories, namely actor, event, place and object. The term classification can be seen in Table TABREF11. After the classification of these terms, we implemented two different matching algorithms between the extracted words and the criminal law micro-thesaurus terms. The first is an exact string match wherein lowercase equivalents of the words of the input sentences are matched exactly with lowercase equivalents of the predefined terms. The second matching algorithm uses Levenshtein distance BIBREF16, allowing some near-matches that are close enough to the target term. Framework for Processing Portuguese Text ::: Linked Data: Ontology, Thesaurus and Terminology In the computer science field, an ontology can be defined as: a formal specification of a conceptualization; a shared vocabulary and taxonomy which models a domain with the definition of objects and/or concepts and their properties and relations; the representation of entities, ideas, and events, along with their properties and relations, according to a system of categories. A knowledge base is one kind of repository typically used to store answers to questions or solutions to problems, enabling rapid search, retrieval, and reuse, either by an ontology or directly by those requesting support. For a more detailed description of ontologies and knowledge bases, see for instance BIBREF17. For designing the ontology adequate for our goals, we referred to the Simple Event Model (SEM) BIBREF18 as a baseline model. A pictorial representation of this ontology is given in Figure FIGREF16. Considering the criminal law domain case study, we made a few changes to the original SEM ontology. The entities of the model are: Actor: person involved with the event; Place: location of the event; Time: time of the event; Object: what the actor acts upon; Organization: organization involved with the event; Currency: money involved with the event. The proposed ontology was designed in such a manner that it can incorporate information extracted from multiple documents. In this context, suppose that the source of the documents is a legal police department, where each document falls under a particular case/crime; further, a single case can have documents in multiple languages. Now, considering that case 1 has 100 documents and case 2 has 100 documents, there is not only a connection among the documents of a single case but rather among all the cases with all the combined 200 documents. In this way, the proposed method is able to produce a detailed and well-connected knowledge base. 
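A small sketch of the two matching strategies described above: an exact comparison on lowercased strings and a fuzzy comparison based on Levenshtein distance. The thesaurus entries and the distance threshold are made-up illustrations, and the edit-distance routine is a plain dynamic-programming version rather than whatever implementation the project actually used.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Toy stand-in for the criminal-law micro-thesaurus (term -> category).
THESAURUS = {"theft": "event", "weapon": "object", "offender": "actor"}

def match_terms(words, max_distance=1):
    """Return (word, term, category) hits via exact or near match."""
    hits = []
    for w in words:
        for term, cat in THESAURUS.items():
            if w.lower() == term or levenshtein(w.lower(), term) <= max_distance:
                hits.append((w, term, cat))
    return hits

print(match_terms(["Theft", "wepon", "car"]))
# [('Theft', 'theft', 'event'), ('wepon', 'weapon', 'object')]
```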
Figure FIGREF23 shows the proposed ontology, which, in our evaluation procedure, was populated with 3121 events entries from 51 documents. Protege BIBREF19 tool was used for creating the ontology and GraphDB BIBREF20 for populating & querying the data. GraphDB is an enterprise-ready Semantic Graph Database, compliant with W3C Standards. Semantic Graph Databases (also called RDF triplestores) provide the core infrastructure for solutions where modeling agility, data integration, relationship exploration, and cross-enterprise data publishing and consumption are important. GraphDB has a SPARQL (SQL-like query language) interface for RDF graph databases with the following types: SELECT: returns tabular results CONSTRUCT: creates a new RDF graph based on query results ASK: returns "YES", if the query has a solution, otherwise "NO" DESCRIBE: returns RDF data about a resource. This is useful when the RDF data structure in the data source is not known INSERT: inserts triples into a graph DELETE: deletes triples from a graph Furthermore, we have extended the ontology BIBREF21 to connect the extracted terms with Eurovoc criminal law (discussed in subsection SECREF10) and IATE BIBREF7 terms. IATE (Interactive Terminology for Europe) is the EU's general terminology database and its aim is to provide a web-based infrastructure for all EU terminology resources, enhancing the availability and standardization of the information. The extended ontology has a number of sub-classes for Actor, Event, Object and Place classes detailed in Table TABREF30. Discussion We have defined a major design principle for our architecture: it should be modular and not rely on human made rules allowing, as much as possible, its independence from a specific language. In this way, its potential application to another language would be easier, simply by changing the modules or the models of specific modules. In fact, we have explored the use of already existing modules and adopted and integrated several of these tools into our pipeline. It is important to point out that, as far as we know, there is no integrated architecture supporting the full processing pipeline for the Portuguese language. We evaluated several systems like Rembrandt BIBREF22 or LinguaKit: the former only has the initial steps of our proposal (until NER) and the later performed worse than our system. This framework, developed within the context of the Agatha project (described in Section SECREF1) has the full processing pipeline for Portuguese texts: it receives sentences as input and outputs ontological information: a) first performs all NLP typical tasks until semantic role labelling; b) then, it extracts subject-verb-object triples; c) and, then, it performs ontology matching procedures. As a final result, the obtained output is inserted into a specialized ontology. We are aware that each of the architecture modules can, and should, be improved but our main goal was the creation of a full working text processing pipeline for the Portuguese language. Conclusions and Future Work Besides the end–to–end NLP pipeline for the Portuguese language, the other main contributions of this work can be summarize as follows: Development of an ontology for the criminal law domain; Alignment of the Eurovoc thesaurus and IATE terminology with the ontology created; Representation of the extracted events from texts in the linked knowledge base defined. 
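To illustrate how the populated knowledge base can be queried, the snippet below sends a SELECT query to a GraphDB repository through its SPARQL endpoint using the SPARQLWrapper library. The endpoint URL, repository name, prefix, and class/property names are placeholders of my own; the real identifiers depend on how the extended ontology is deployed and are not spelled out in the text.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint -- adjust to the deployed GraphDB repository.
ENDPOINT = "http://localhost:7200/repositories/agatha"   # assumed URL
sparql = SPARQLWrapper(ENDPOINT)
sparql.setReturnFormat(JSON)

# Hypothetical query: events with their actors and places (names assumed).
sparql.setQuery("""
PREFIX ev: <http://example.org/agatha/event#>
SELECT ?event ?actor ?place
WHERE {
  ?event ev:hasActor ?actor ;
         ev:hasPlace ?place .
}
LIMIT 10
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["event"]["value"], row["actor"]["value"], row["place"]["value"])
```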
The obtained results support our claim that the proposed system can be used as a base tool for information extraction for the Portuguese language. Being composed by several modules, each of them with a high level of complexity, it is certain that our approach can be improved and an overall better performance can be achieved. As future work we intend, not only to continue improving the individual modules, but also plan to extend this work to the: automatic creation of event timelines; incorporation in the knowledge base of information obtained from videos or pictures describing scenes relevant to criminal investigations. Acknowledgments The authors would like to thank COMPETE 2020, PORTUGAL 2020 Program, the European Union, and ALENTEJO 2020 for supporting this research as part of Agatha Project SI & IDT number 18022 (Intelligent analysis system of open of sources information for surveillance/crime control). The authors would also like to thank LISP - Laboratory of Informatics, Systems and Parallelism. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Were any of the pipeline components based on deep learning models? Only give me the answer and do not output any other words.
Answer:
[ "No", "No" ]
276
2,944
single_doc_qa
1197611f86cb48a48d2af9545bdf1c53
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: \section{Introduction} Underwater robot picking is to use the robot to automatically capture sea creatures like holothurian, echinus, scallop, or starfish in an open-sea farm where underwater object detection is the key technology for locating creatures. Until now, the datasets used in this community are released by the Underwater Robot Professional Contest (URPC$\protect\footnote{Underwater Robot Professional Contest: {\bf http://en.cnurpc.org}.}$) beginning from 2017, in which URPC2017 and URPC2018 are most often used for research. Unfortunately, as the information listed in Table \ref{Info}, URPC series datasets do not provide the annotation file of the test set and cannot be downloaded after the contest. Therefore, researchers \cite{2020arXiv200511552C,2019arXiv191103029L} first have to divide the training data into two subsets, including a new subset of training data and a new subset of testing data, and then train their proposed method and other \emph{SOTA} methods. On the one hand, training other methods results in a significant increase in workload. On the other hand, different researchers divide different datasets in different ways, \begin{table}[t] \renewcommand\tabcolsep{3.5pt} \caption{Information about all the collected datasets. * denotes the test set's annotations are not available. \emph{3} in Class means three types of creatures are labeled, \emph{i.e.,} holothurian, echinus, and scallop. \emph{4} means four types of creatures are labeled (starfish added). Retention represents the proportion of images that retain after similar images have been removed.} \centering \begin{tabular}{|l|c|c|c|c|c|} \hline Dataset&Train&Test&Class&Retention&Year \\ \hline URPC2017&17,655&985*&3&15\%&2017 \\ \hline URPC2018&2,901&800*&4&99\%&2018 \\ \hline URPC2019&4,757&1,029*&4&86\%&2019 \\ \hline URPC2020$_{ZJ}$&5,543&2,000*&4&82\%&2020 \\ \hline URPC2020$_{DL}$&6,575&2,400*&4&80\%&2020 \\ \hline UDD&1,827&400&3&84\%&2020 \\ \hline \end{tabular} \label{Info} \end{table} \begin{figure*}[htbp] \begin{center} \includegraphics[width=1\linewidth]{example.pdf} \end{center} \caption{Examples in DUO, which show a variety of scenarios in underwater environments.} \label{exam} \end{figure*} causing there is no unified benchmark to compare the performance of different algorithms. In terms of the content of the dataset images, there are a large number of similar or duplicate images in the URPC datasets. URPC2017 only retains 15\% images after removing similar images compared to other datasets. Thus the detector trained on URPC2017 is easy to overfit and cannot reflect the real performance. For other URPC datasets, the latter also includes images from the former, \emph{e.g.}, URPC2019 adds 2,000 new images compared to URPC2018; compared with URPC2019, URPC2020$_{ZJ}$ adds 800 new images. The URPC2020$_{DL}$ adds 1,000 new images compared to the URPC2020$_{ZJ}$. It is worth mentioning that the annotation of all datasets is incomplete; some datasets lack the starfish labels and it is easy to find error or missing labels. \cite{DBLP:conf/iclr/ZhangBHRV17} pointed out that although the CNN model has a strong fitting ability for any dataset, the existence of dirty data will significantly weaken its robustness. 
Therefore, a reasonable dataset (containing a small number of similar images as well as an accurate annotation) and a corresponding recognized benchmark are urgently needed to promote community development. To address these issues, we introduce a dataset called Detecting Underwater Objects (DUO) by collecting and re-annotating all the available underwater datasets. It contains 7,782 underwater images after deleting overly similar images and has a more accurate annotation with four types of classes (\emph{i.e.,} holothurian, echinus, scallop, and starfish). Besides, based on the MMDetection$\protect\footnote{MMDetection is an open source object detection toolbox based on PyTorch. {\bf https://github.com/open-mmlab/mmdetection}}$ \cite{chen2019mmdetection} framework, we also provide a \emph{SOTA} detector benchmark containing efficiency and accuracy indicators, providing a reference for both academic research and industrial applications. It is worth noting that JETSON AGX XAVIER$\protect\footnote{JETSON AGX XAVIER is an embedded development board produced by NVIDIA which could be deployed in an underwater robot. Please refer {\bf https://developer.nvidia.com/embedded/jetson-agx-xavier-developer-kit} for more information.}$ was used to assess all the detectors in the efficiency test in order to simulate robot-embedded environment. DUO will be released in https://github.com/chongweiliu soon. In summary, the contributions of this paper can be listed as follows. $\bullet$ By collecting and re-annotating all relevant datasets, we introduce a dataset called DUO with more reasonable annotations as well as a variety of underwater scenes. $\bullet$ We provide a corresponding benchmark of \emph{SOTA} detectors on DUO including efficiency and accuracy indicators which could be a reference for both academic research and industrial applications. \pagestyle{empty} \section{Background} In the year of 2017, underwater object detection for open-sea farming is first proposed in the target recognition track of Underwater Robot Picking Contest 2017$\protect\footnote{From 2020, the name has been changed into Underwater Robot Professional Contest which is also short for URPC.}$ (URPC2017) which aims to promote the development of theory, technology, and industry of the underwater agile robot and fill the blank of the grabbing task of the underwater agile robot. The competition sets up a target recognition track, a fixed-point grasping track, and an autonomous grasping track. The target recognition track concentrates on finding the {\bf high accuracy and efficiency} algorithm which could be used in an underwater robot for automatically grasping. The datasets we used to generate the DUO are listed below. The detailed information has been shown in Table \ref{Info}. {\bf URPC2017}: It contains 17,655 images for training and 985 images for testing and the resolution of all the images is 720$\times$405. All the images are taken from 6 videos at an interval of 10 frames. However, all the videos were filmed in an artificial simulated environment and pictures from the same video look almost identical. {\bf URPC2018}: It contains 2,901 images for training and 800 images for testing and the resolutions of the images are 586$\times$480, 704$\times$576, 720$\times$405, and 1,920$\times$1,080. The test set's annotations are not available. Besides, some images were also collected from an artificial underwater environment. 
{\bf URPC2019}: It contains 4,757 images for training and 1029 images for testing and the highest resolution of the images is 3,840$\times$2,160 captured by a GOPro camera. The test set's annotations are also not available and it contains images from the former contests. {\bf URPC2020$_{ZJ}$}: From 2020, the URPC will be held twice a year. It was held first in Zhanjiang, China, in April and then in Dalian, China, in August. URPC2020$_{ZJ}$ means the dataset released in the first URPC2020 and URPC2020$_{DL}$ means the dataset released in the second URPC2020. This dataset contains 5,543 images for training and 2,000 images for testing and the highest resolution of the images is 3,840$\times$2,160. The test set's annotations are also not available. {\bf URPC2020$_{DL}$}: This dataset contains 6,575 images for training and 2,400 images for testing and the highest resolution of the images is 3,840$\times$2,160. The test set's annotations are also not available. {\bf UDD \cite{2020arXiv200301446W}}: This dataset contains 1,827 images for training and 400 images for testing and the highest resolution of the images is 3,840$\times$2,160. All the images are captured by a diver and a robot in a real open-sea farm. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{pie.pdf} \end{center} \caption{The proportion distribution of the objects in DUO.} \label{pie} \end{figure} \begin{figure*} \centering \subfigure[]{\includegraphics[width=3.45in]{imagesize.pdf}} \subfigure[]{\includegraphics[width=3.45in]{numInstance.pdf}} \caption{(a) The distribution of instance sizes for DUO; (b) The number of categories per image.} \label{sum} \end{figure*} \section{Proposed Dataset} \subsection{Image Deduplicating} As we explained in Section 1, there are a large number of similar or repeated images in the series of URPC datasets. Therefore, it is important to delete duplicate or overly similar images and keep a variety of underwater scenarios when we merge these datasets together. Here we employ the Perceptual Hash algorithm (PHash) to remove those images. PHash has the special property that the hash value is dependent on the image content, and it remains approximately the same if the content is not significantly modified. Thus we can easily distinguish different scenarios and delete duplicate images within one scenario. After deduplicating, we obtain 7,782 images (6,671 images for training; 1,111 for testing). The retention rate of the new dataset is 95\%, which means that there are only a few similar images in the new dataset. Figure \ref{exam} shows that our dataset also retains various underwater scenes. \subsection{Image Re-annotation} Due to the small size of objects and the blur underwater environment, there are always missing or wrong labels in the existing annotation files. In addition, some test sets' annotation files are not available and some datasets do not have the starfish annotation. In order to address these issues, we follow the next process which combines a CNN model and manual annotation to re-annotate these images. Specifically, we first train a detector (\emph{i.e.,} GFL \cite{li2020generalized}) with the originally labeled images. After that, the trained detector predicts all the 7,782 images. We treat the prediction as the groundtruth and use it to train the GFL again. We get the final GFL prediction called {\bf the coarse annotation}. Next, we use manual correction to get the final annotation called {\bf the fine annotation}. 
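A sketch of perceptual-hash deduplication along the lines described above, using the Pillow and imagehash libraries. The folder name and the Hamming-distance threshold are assumptions for illustration; the paper does not state which pHash implementation or cut-off was used.

```python
from pathlib import Path
from PIL import Image
import imagehash

def deduplicate(image_dir, threshold=8):
    """Keep one representative per group of perceptually similar images."""
    kept, kept_hashes = [], []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        h = imagehash.phash(Image.open(path))
        # Two pHashes within `threshold` bits are treated as the same scene.
        if all(h - other > threshold for other in kept_hashes):
            kept.append(path)
            kept_hashes.append(h)
    return kept

unique_images = deduplicate("urpc_all_images")   # hypothetical folder name
print(len(unique_images), "images retained")
```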
Notably, we adopt the COCO \cite{Belongie2014} annotation form as the final format. \subsection{Dataset Statistics} {\bf The proportion of classes}: The total number of objects is 74,515. Holothurian, echinus, scallop, and starfish are 7,887, 50,156, 1,924, and 14,548, respectively. Figure \ref{pie} shows the proportion of each creatures where echinus accounts for 67.3\% of the total. The whole data distribution shows an obvious long-tail distribution because the different economic benefits of different seafoods determine the different breed quantities. {\bf The distribution of instance sizes}: Figure \ref{sum}(a) shows an instance size distribution of DUO. \emph{Percent of image size} represents the ratio of object area to image area, and \emph{Percent of instance} represents the ratio of the corresponding number of objects to the total number of objects. Because of these small creatures and high-resolution images, the vast majority of objects occupy 0.3\% to 1.5\% of the image area. {\bf The instance number per image}: Figure \ref{sum}(b) illustrates the number of categories per image for DUO. \emph{Number of instances} represents the number of objects one image has, and \emph{ Percentage of images} represents the ratio of the corresponding number of images to the total number of images. Most images contain between 5 and 15 instances, with an average of 9.57 instances per image. {\bf Summary}: In general, smaller objects are harder to detect. For PASCAL VOC \cite{Everingham2007The} or COCO \cite{Belongie2014}, roughly 50\% of all objects occupy no more than 10\% of the image itself, and others evenly occupy from 10\% to 100\%. In the aspect of instances number per image, COCO contains 7.7 instances per image and VOC contains 3. In comparison, DUO has 9.57 instances per image and most instances less than 1.5\% of the image size. Therefore, DUO contains almost exclusively massive small instances and has the long-tail distribution at the same time, which means it is promising to design a detector to deal with massive small objects and stay high efficiency at the same time for underwater robot picking. \section{Benchmark} Because the aim of underwater object detection for robot picking is to find {\bf the high accuracy and efficiency} algorithm, we consider both the accuracy and efficiency evaluations in the benchmark as shown in Table \ref{ben}. \subsection{Evaluation Metrics} Here we adopt the standard COCO metrics (mean average precision, \emph{i.e.,} mAP) for the accuracy evaluation and also provide the mAP of each class due to the long-tail distribution. {\bf AP} -- mAP at IoU=0.50:0.05:0.95. {\bf AP$_{50}$} -- mAP at IoU=0.50. {\bf AP$_{75}$} -- mAP at IoU=0.75. {\bf AP$_{S}$} -- {\bf AP} for small objects of area smaller than 32$^{2}$. {\bf AP$_{M}$} -- {\bf AP} for objects of area between 32$^{2}$ and 96$^{2}$. {\bf AP$_{L}$} -- {\bf AP} for large objects of area bigger than 96$^{2}$. {\bf AP$_{Ho}$} -- {\bf AP} in holothurian. {\bf AP$_{Ec}$} -- {\bf AP} in echinus. {\bf AP$_{Sc}$} -- {\bf AP} in scallop. {\bf AP$_{St}$} -- {\bf AP} in starfish. For the efficiency evaluation, we provide three metrics: {\bf Param.} -- The parameters of a detector. {\bf FLOPs} -- Floating-point operations per second. {\bf FPS} -- Frames per second. Notably, {\bf FLOPs} is calculated under the 512$\times$512 input image size and {\bf FPS} is tested on a JETSON AGX XAVIER under MODE$\_$30W$\_$ALL. 
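The accuracy metrics above are the standard COCO ones, so a detector's results on DUO can be scored with pycocotools as sketched below. The file names are placeholders, and per-class scores such as AP$_{Ho}$ or AP$_{Ec}$ would additionally require restricting the evaluation to individual category ids, which is omitted here.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths: DUO ground truth and a detector's results in COCO format.
coco_gt = COCO("duo_test_annotations.json")
coco_dt = coco_gt.loadRes("detector_results.json")

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()   # prints AP, AP50, AP75, AP_S, AP_M, AP_L, etc.
```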
\subsection{Standard Training Configuration} We follow a widely used open-source toolbox, \emph{i.e.,} MMDetection (V2.5.0) to produce up our benchmark. During the training, the standard configurations are as follows: $\bullet$ We initialize the backbone models (\emph{e.g.,} ResNet50) with pre-trained parameters on ImageNet \cite{Deng2009ImageNet}. $\bullet$ We resize each image into 512 $\times$ 512 pixels both in training and testing. Each image is flipped horizontally with 0.5 probability during training. $\bullet$ We normalize RGB channels by subtracting 123.675, 116.28, 103.53 and dividing by 58.395, 57.12, 57.375, respectively. $\bullet$ SGD method is adopted to optimize the model. The initial learning rate is set to be 0.005 in a single GTX 1080Ti with batchsize 4 and is decreased by 0.1 at the 8th and 11th epoch, respectively. WarmUp \cite{2019arXiv190307071L} is also employed in the first 500 iterations. Totally there are 12 training epochs. $\bullet$ Testing time augmentation (\emph{i.e.,} flipping test or multi-scale testing) is not employed. \subsection{Benchmark Analysis} Table \ref{ben} shows the benchmark for the \emph{SOTA} methods. Multi- and one- stage detectors with three kinds of backbones (\emph{i.e.,} ResNet18, 50, 101) give a comprehensive assessment on DUO. We also deploy all the methods to AGX to assess efficiency. In general, the multi-stage (Cascade R-CNN) detectors have high accuracy and low efficiency, while the one-stage (RetinaNet) detectors have low accuracy and high efficiency. However, due to recent studies \cite{zhang2019bridging} on the allocation of more reasonable positive and negative samples in training, one-stage detectors (ATSS or GFL) can achieve both high accuracy and high efficiency. \begin{table*}[htbp] \renewcommand\tabcolsep{3.0pt} \begin{center} \caption{Benchmark of \emph{SOTA} detectors (single-model and single-scale results) on DUO. FPS is measured on the same machine with a JETSON AGX XAVIER under the same MMDetection framework, using a batch size of 1 whenever possible. 
R: ResNet.} \label{ben} \begin{tabular}{|l|l|c|c|c|ccc|ccc|cccc|} \hline Method&Backbone&Param.&FLOPs&FPS&AP&AP$_{50}$&AP$_{75}$&AP$_{S}$&AP$_{M}$&AP$_{L}$&AP$_{Ho}$&AP$_{Ec}$&AP$_{Sc}$&AP$_{St}$ \\ \hline \emph{multi-stage:} &&&&&&&&&&&&&& \\ \multirow{3}{*}{Faster R-CNN \cite{Ren2015Faster}} &R-18&28.14M&49.75G&5.7&50.1&72.6&57.8&42.9&51.9&48.7&49.1&60.1&31.6&59.7\\ &R-50&41.14M&63.26G&4.7&54.8&75.9&63.1&53.0&56.2&53.8&55.5&62.4&38.7&62.5\\ &R-101&60.13M&82.74G&3.7&53.8&75.4&61.6&39.0&55.2&52.8&54.3&62.0&38.5&60.4\\ \hline \multirow{3}{*}{Cascade R-CNN \cite{Cai_2019}} &R-18&55.93M&77.54G&3.4&52.7&73.4&60.3&\bf 49.0&54.7&50.9&51.4&62.3&34.9&62.3\\ &R-50&68.94M&91.06G&3.0&55.6&75.5&63.8&44.9&57.4&54.4&56.8&63.6&38.7&63.5\\ &R-101&87.93M&110.53G&2.6&56.0&76.1&63.6&51.2&57.5&54.7&56.2&63.9&41.3&62.6\\ \hline \multirow{3}{*}{Grid R-CNN \cite{lu2019grid}} &R-18&51.24M&163.15G&3.9&51.9&72.1&59.2&40.4&54.2&50.1&50.7&61.8&33.3&61.9\\ &R-50&64.24M&176.67G&3.4&55.9&75.8&64.3&40.9&57.5&54.8&56.7&62.9&39.5&64.4\\ &R-101&83.24M&196.14G&2.8&55.6&75.6&62.9&45.6&57.1&54.5&55.5&62.9&41.0&62.9\\ \hline \multirow{3}{*}{RepPoints \cite{yang2019reppoints}} &R-18&20.11M&\bf 35.60G&5.6&51.7&76.9&57.8&43.8&54.0&49.7&50.8&63.3&33.6&59.2\\ &R-50&36.60M&48.54G&4.8&56.0&80.2&63.1&40.8&58.5&53.7&56.7&65.7&39.3&62.3\\ &R-101&55.60M&68.02G&3.8&55.4&79.0&62.6&42.2&57.3&53.9&56.0&65.8&39.0&60.9\\ \hline \hline \emph{one-stage:} &&&&&&&&&&&&&& \\ \multirow{3}{*}{RetinaNet \cite{Lin2017Focal}} &R-18&19.68M&39.68G&7.1&44.7&66.3&50.7&29.3&47.6&42.5&46.9&54.2&23.9&53.8\\ &R-50&36.17M&52.62G&5.9&49.3&70.3&55.4&36.5&51.9&47.6&54.4&56.6&27.8&58.3\\ &R-101&55.16M&72.10G&4.5&50.4&71.7&57.3&34.6&52.8&49.0&54.6&57.0&33.7&56.3\\ \hline \multirow{3}{*}{FreeAnchor \cite{2019arXiv190902466Z}} &R-18&19.68M&39.68G&6.8&49.0&71.9&55.3&38.6&51.7&46.7&47.2&62.8&28.6&57.6\\ &R-50&36.17M&52.62G&5.8&54.4&76.6&62.5&38.1&55.7&53.4&55.3&65.2&35.3&61.8\\ &R-101&55.16M&72.10G&4.4&54.6&76.9&62.9&36.5&56.5&52.9&54.0&65.1&38.4&60.7\\ \hline \multirow{3}{*}{FoveaBox \cite{DBLP:journals/corr/abs-1904-03797}} &R-18&21.20M&44.75G&6.7&51.6&74.9&57.4&40.0&53.6&49.8&51.0&61.9&34.6&59.1\\ &R-50&37.69M&57.69G&5.5&55.3&77.8&62.3&44.7&57.4&53.4&57.9&64.2&36.4&62.8\\ &R-101&56.68M&77.16G&4.2&54.7&77.3&62.3&37.7&57.1&52.4&55.3&63.6&38.9&60.8\\ \hline \multirow{3}{*}{PAA \cite{2020arXiv200708103K}} &R-18&\bf 18.94M&38.84G&3.0&52.6&75.3&58.8&41.3&55.1&50.2&49.9&64.6&35.6&60.5\\ &R-50&31.89M&51.55G&2.9&56.8&79.0&63.8&38.9&58.9&54.9&56.5&66.9&39.9&64.0\\ &R-101&50.89M&71.03G&2.4&56.5&78.5&63.7&40.9&58.7&54.5&55.8&66.5&42.0&61.6\\ \hline \multirow{3}{*}{FSAF \cite{zhu2019feature}} &R-18&19.53M&38.88G&\bf 7.4&49.6&74.3&55.1&43.4&51.8&47.5&45.5&63.5&30.3&58.9\\ &R-50&36.02M&51.82G&6.0&54.9&79.3&62.1&46.2&56.7&53.3&53.7&66.4&36.8&62.5\\ &R-101&55.01M&55.01G&4.5&54.6&78.7&61.9&46.0&57.1&52.2&53.0&66.3&38.2&61.1\\ \hline \multirow{3}{*}{FCOS \cite{DBLP:journals/corr/abs-1904-01355}} &R-18&\bf 18.94M&38.84G&6.5&48.4&72.8&53.7&30.7&50.9&46.3&46.5&61.5&29.1&56.6\\ &R-50&31.84M&50.34G&5.4&53.0&77.1&59.9&39.7&55.6&50.5&52.3&64.5&35.2&60.0\\ &R-101&50.78M&69.81G&4.2&53.2&77.3&60.1&43.4&55.4&51.2&51.7&64.1&38.5&58.5\\ \hline \multirow{3}{*}{ATSS \cite{zhang2019bridging}} &R-18&\bf 18.94M&38.84G&6.0&54.0&76.5&60.9&44.1&56.6&51.4&52.6&65.5&35.8&61.9\\ &R-50&31.89M&51.55G&5.2&58.2&\bf 80.1&66.5&43.9&60.6&55.9&\bf 58.6&67.6&41.8&64.6\\ &R-101&50.89M&71.03G&3.8&57.6&79.4&65.3&46.5&60.3&55.0&57.7&67.2&42.6&62.9\\ \hline \multirow{3}{*}{GFL \cite{li2020generalized}} 
&R-18&19.09M&39.63G&6.3&54.4&75.5&61.9&35.0&57.1&51.8&51.8&66.9&36.5&62.5\\ &R-50&32.04M&52.35G&5.5&\bf 58.6&79.3&\bf 66.7&46.5&\bf 61.6&55.6&\bf 58.6&\bf 69.1&41.3&\bf 65.3\\ &R-101&51.03M&71.82G&4.1&58.3&79.3&65.5&45.1&60.5&\bf 56.3&57.0&\bf 69.1&\bf 43.0&64.0\\ \hline \end{tabular} \end{center} \end{table*}
In terms of accuracy, the gap between the multi- and one-stage methods in AP is therefore not obvious, and AP$_{S}$ is always the lowest of the three size-based APs for every method. Among the per-class APs, AP$_{Sc}$ lags significantly behind the other three classes because this class has the smallest number of instances. In terms of efficiency, large parameter counts and FLOPs result in low FPS on the AGX, with a maximum of only 7.4 FPS, which makes deployment on an underwater robot difficult. Finally, we also found that ResNet101 brings no significant improvement over ResNet50, which suggests that a very deep network may not be useful for detecting small creatures in underwater scenarios. Consequently, designing detectors with both high accuracy and high efficiency remains the main direction in this field, and there is still considerable room to improve performance. To achieve this goal, a shallow backbone with strong multi-scale feature fusion could be proposed to extract discriminative features of small-scale aquatic organisms, and a specially designed training strategy may overcome DUO's long-tailed distribution, such as a more reasonable positive/negative label sampling mechanism or a class-balanced image allocation strategy within a training batch. \section{Conclusion} In this paper, we introduce a dataset (DUO) and a corresponding benchmark to fill the gaps in the community. DUO contains a variety of underwater scenes and more reasonable annotations. The benchmark includes both efficiency and accuracy indicators to conduct a comprehensive evaluation of the \emph{SOTA} detectors. The two contributions can serve as a reference for academic research and industrial applications, as well as promote community development. \bibliographystyle{IEEEbib} Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What are the datasets used in this community for research? Only give me the answer and do not output any other words.
Answer:
[ "URPC2017, URPC2018, URPC2019, URPC2020_ZJ and URPC2020_DL." ]
276
7,001
single_doc_qa
47e428a801b0401daf28ef32d603ac80
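The training recipe described in Passage 1 above (512×512 resizing, horizontal flips with probability 0.5, the stated channel normalization, SGD with learning rate 0.005, step decay at epochs 8 and 11, 500 warm-up iterations, batch size 4, 12 epochs) maps naturally onto an MMDetection 2.x style config. The fragment below is a minimal sketch assuming the standard MMDetection 2.5 config keys; it is not the authors' released configuration, the momentum and weight-decay values are MMDetection defaults rather than values stated in the passage, and dataset paths/class names are omitted as placeholders.

```python
# Minimal MMDetection-2.x-style config fragment mirroring the training recipe
# described in the passage (assumed key names; momentum/weight decay are defaults).
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53],   # per-channel mean subtracted from RGB
    std=[58.395, 57.12, 57.375],      # per-channel std used for division
    to_rgb=True)

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(512, 512), keep_ratio=False),  # fixed 512x512
    dict(type='RandomFlip', flip_ratio=0.5),                      # horizontal flip
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
]

data = dict(samples_per_gpu=4, workers_per_gpu=2)   # batch size 4 on a single GPU

# SGD with lr 0.005, decayed by 0.1 at epochs 8 and 11, 500 linear warm-up iterations
optimizer = dict(type='SGD', lr=0.005, momentum=0.9, weight_decay=0.0001)
lr_config = dict(policy='step', warmup='linear', warmup_iters=500,
                 warmup_ratio=0.001, step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)  # 12 training epochs in total
```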
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction While most NLP resources are English-specific, there have been several recent efforts to build multilingual benchmarks. One possibility is to collect and annotate data in multiple languages separately BIBREF0, but most existing datasets have been created through translation BIBREF1, BIBREF2. This approach has two desirable properties: it relies on existing professional translation services rather than requiring expertise in multiple languages, and it results in parallel evaluation sets that offer a meaningful measure of the cross-lingual transfer gap of different models. The resulting multilingual datasets are generally used for evaluation only, relying on existing English datasets for training. Closely related to that, cross-lingual transfer learning aims to leverage large datasets available in one language—typically English—to build multilingual models that can generalize to other languages. Previous work has explored 3 main approaches to that end: machine translating the test set into English and using a monolingual English model (Translate-Test), machine translating the training set into each target language and training the models on their respective languages (Translate-Train), or using English data to fine-tune a multilingual model that is then transferred to the rest of languages (Zero-Shot). The dataset creation and transfer procedures described above result in a mixture of original, human translated and machine translated data when dealing with cross-lingual models. In fact, the type of text a system is trained on does not typically match the type of text it is exposed to at test time: Translate-Test systems are trained on original data and evaluated on machine translated test sets, Zero-Shot systems are trained on original data and evaluated on human translated test sets, and Translate-Train systems are trained on machine translated data and evaluated on human translated test sets. Despite overlooked to date, we show that such mismatch has a notable impact in the performance of existing cross-lingual models. By using back-translation BIBREF3 to paraphrase each training instance, we obtain another English version of the training set that better resembles the test set, obtaining substantial improvements for the Translate-Test and Zero-Shot approaches in cross-lingual Natural Language Inference (NLI). While improvements brought by machine translation have previously been attributed to data augmentation BIBREF4, we reject this hypothesis and show that the phenomenon is only present in translated test sets, but not in original ones. Instead, our analysis reveals that this behavior is caused by subtle artifacts arising from the translation process itself. In particular, we show that translating different parts of each instance separately (e.g. the premise and the hypothesis in NLI) can alter superficial patterns in the data (e.g. the degree of lexical overlap between them), which severely affects the generalization ability of current models. Based on the gained insights, we improve the state-of-the-art in XNLI, and show that some previous findings need to be reconsidered in the light of this phenomenon. Related work ::: Cross-lingual transfer learning. 
Current cross-lingual models work by pre-training multilingual representations using some form of language modeling, which are then fine-tuned on the relevant task and transferred to different languages. Some authors leverage parallel data to that end BIBREF5, BIBREF6, but training a model akin to BERT BIBREF7 on the combination of monolingual corpora in multiple languages is also effective BIBREF8. Closely related to our work, BIBREF4 showed that replacing segments of the training data with their translation during fine-tuning is helpful. However, they attribute this behavior to a data augmentation effect, which we believe should be reconsidered given the new evidence we provide. Related work ::: Multilingual benchmarks. Most benchmarks covering a wide set of languages have been created through translation, as is the case of XNLI BIBREF1 for NLI, PAWS-X BIBREF9 for adversarial paraphrase identification, and XQuAD BIBREF2 and MLQA BIBREF10 for Question Answering (QA). A notable exception is TyDi QA BIBREF0, a contemporaneous QA dataset that was separately annotated in 11 languages. Other cross-lingual datasets leverage existing multilingual resources, as is the case of MLDoc BIBREF11 for document classification and Wikiann BIBREF12 for named entity recognition. Concurrently with our work, BIBREF13 combine some of these datasets into a single multilingual benchmark and evaluate some well-known methods on it. Related work ::: Annotation artifacts. Several studies have shown that NLI datasets like SNLI BIBREF14 and MultiNLI BIBREF15 contain spurious patterns that can be exploited to obtain strong results without making real inferential decisions. For instance, BIBREF16 and BIBREF17 showed that a hypothesis-only baseline performs better than chance due to cues in the lexical choice and sentence length of the hypotheses. Similarly, BIBREF18 showed that NLI models tend to predict entailment for sentence pairs with a high lexical overlap. Several authors have worked on adversarial datasets to diagnose these issues and provide a more challenging benchmark BIBREF19, BIBREF20, BIBREF21. Besides NLI, other tasks like QA have also been found to be susceptible to annotation artifacts BIBREF22, BIBREF23. While previous work has focused on the monolingual scenario, we show that translation can interfere with these artifacts in multilingual settings. Related work ::: Translationese. Translated texts are known to have unique features like simplification, explicitation, normalization and interference, which are referred to as translationese BIBREF24. This phenomenon has been reported to have a notable impact on machine translation evaluation BIBREF25, BIBREF26. For instance, back-translation brings large BLEU gains for reversed test sets (i.e. when translationese is on the source side and original text is used as reference), but its effect diminishes in the natural direction BIBREF27. While connected, the phenomenon we analyze is different in that it arises from translation inconsistencies due to the lack of context, and it affects cross-lingual transfer learning rather than machine translation. Experimental design Our goal is to analyze the effect of both human and machine translation on cross-lingual models.
For that purpose, the core idea of our work is to (i) use machine translation to either translate the training set into other languages, or generate English paraphrases of it through back-translation, and (ii) evaluate the resulting systems on original, human translated and machine translated test sets in comparison with systems trained on original data. We next describe the models used in our experiments (§SECREF6), the specific training variants explored (§SECREF8), and the evaluation procedure followed (§SECREF10). Experimental design ::: Models and transfer methods We experiment with two models that are representative of the state-of-the-art in monolingual and cross-lingual pre-training: (i) Roberta BIBREF28, which is an improved version of BERT that uses masked language modeling to pre-train an English Transformer model, and (ii) XLM-R BIBREF8, which is a multilingual extension of the former pre-trained on 100 languages. In both cases, we use the large models released by the authors under the fairseq repository. As discussed next, we explore different variants of the training set to fine-tune each model on different tasks. At test time, we try both machine translating the test set into English (Translate-Test) and, in the case of XLM-R, using the actual test set in the target language (Zero-Shot). Experimental design ::: Training variants We try 3 variants of each training set to fine-tune our models: (i) the original one in English (Orig), (ii) an English paraphrase of it generated through back-translation using Spanish or Finnish as pivot (BT-ES and BT-FI), and (iii) a machine translated version in Spanish or Finnish (MT-ES and MT-FI). For sentences occurring multiple times in the training set (e.g. premises repeated for multiple hypotheses), we use the exact same translation for all occurrences, as our goal is to understand the inherent effect of translation rather than its potential application as a data augmentation method. In order to train the machine translation systems for MT-XX and BT-XX, we use the big Transformer model BIBREF29 with the same settings as BIBREF30 and SentencePiece tokenization BIBREF31 with a joint vocabulary of 32k subwords. For English-Spanish, we train for 10 epochs on all parallel data from WMT 2013 BIBREF32 and ParaCrawl v5.0 BIBREF33. For English-Finnish, we train for 40 epochs on Europarl and Wiki Titles from WMT 2019 BIBREF34, ParaCrawl v5.0, and DGT, EUbookshop and TildeMODEL from OPUS BIBREF35. In both cases, we remove sentences longer than 250 tokens, with a source/target ratio exceeding 1.5, or for which langid.py BIBREF36 predicts a different language, resulting in a final corpus size of 48M and 7M sentence pairs, respectively. We use sampling decoding with a temperature of 0.5 for inference, which produces more diverse translations than beam search BIBREF37 and performed better in our preliminary experiments. Experimental design ::: Tasks and evaluation procedure We use the following tasks for our experiments: Experimental design ::: Tasks and evaluation procedure ::: Natural Language Inference (NLI). Given a premise and a hypothesis, the task is to determine whether there is an entailment, neutral or contradiction relation between them. We fine-tune our models on MultiNLI BIBREF15 for 10 epochs using the same settings as BIBREF28. In most of our experiments, we evaluate on XNLI BIBREF1, which comprises 2490 development and 5010 test instances in 15 languages. 
These were originally annotated in English, and the resulting premises and hypotheses were independently translated into the rest of the languages by professional translators. For the Translate-Test approach, we use the machine translated versions from the authors. Following BIBREF8, we select the best epoch checkpoint according to the average accuracy in the development set. Experimental design ::: Tasks and evaluation procedure ::: Question Answering (QA). Given a context paragraph and a question, the task is to identify the span answering the question in the context. We fine-tune our models on SQuAD v1.1 BIBREF38 for 2 epochs using the same settings as BIBREF28, and report test results for the last epoch. We use two datasets for evaluation: XQuAD BIBREF2, a subset of the SQuAD development set translated into 10 other languages, and MLQA BIBREF10 a dataset consisting of parallel context paragraphs plus the corresponding questions annotated in English and translated into 6 other languages. In both cases, the translation was done by professional translators at the document level (i.e. when translating a question, the text answering it was also shown). For our BT-XX and MT-XX variants, we translate the context paragraph and the questions independently, and map the answer spans using the same procedure as BIBREF39. For the Translate-Test approach, we use the official machine translated versions of MLQA, run inference over them, and map the predicted answer spans back to the target language. Both for NLI and QA, we run each system 5 times with different random seeds and report the average results. Space permitting, we also report the standard deviation across the 5 runs. NLI experiments We next discuss our main results in the XNLI development set (§SECREF15, §SECREF16), run additional experiments to better understand the behavior of our different variants (§SECREF17, §SECREF22, §SECREF25), and compare our results to previous work in the XNLI test set (§SECREF30). NLI experiments ::: Translate-Test results We start by analyzing XNLI development results for Translate-Test. Recall that, in this approach, the test set is machine translated into English, but training is typically done on original English data. Our BT-ES and BT-FI variants close this gap by training on a machine translated English version of the training set generated through back-translation. As shown in Table TABREF9, this brings substantial gains for both Roberta and XLM-R, with an average improvement of 4.6 points in the best case. Quite remarkably, MT-ES and MT-FI also outperform Orig by a substantial margin, and are only 0.8 points below their BT-ES and BT-FI counterparts. Recall that, for these two systems, training is done in machine translated Spanish or Finnish, while inference is done in machine translated English. This shows that the loss of performance when generalizing from original data to machine translated data is substantially larger than the loss of performance when generalizing from one language to another. NLI experiments ::: Zero-Shot results We next analyze the results for the Zero-Shot approach. In this case, inference is done in the test set in each target language which, in the case of XNLI, was human translated from English. As such, different from the Translate-Test approach, neither training on original data (Orig) nor training on machine translated data (BT-XX and MT-XX) makes use of the exact same type of text that the system is exposed to at test time. 
However, as shown in Table TABREF9, both BT-XX and MT-XX outperform Orig by approximately 2 points, which suggests that our (back-)translated versions of the training set are more similar to the human translated test sets than the original one. This also provides a new perspective on the Translate-Train approach, which was reported to outperform Orig in previous work BIBREF5: while the original motivation was to train the model on the same language that it is tested on, our results show that machine translating the training set is beneficial even when the target language is different. NLI experiments ::: Original vs. translated test sets So as to understand whether the improvements observed so far are limited to translated test sets or apply more generally, we conduct additional experiments comparing translated test sets to original ones. However, to the best of our knowledge, all existing non-English NLI benchmarks were created through translation. For that reason, we build a new test set that mimics XNLI, but is annotated in Spanish rather than English. We first collect the premises from a filtered version of CommonCrawl BIBREF42, taking a subset of 5 websites that represent a diverse set of genres: a newspaper, an economy forum, a celebrity magazine, a literature blog, and a consumer magazine. We then ask native Spanish annotators to generate an entailment, a neutral and a contradiction hypothesis for each premise. We collect a total of 2490 examples using this procedure, which is the same size as the XNLI development set. Finally, we create a human translated and a machine translated English version of the dataset using professional translators from Gengo and our machine translation system described in §SECREF8, respectively. We report results for the best epoch checkpoint on each set. As shown in Table TABREF18, both BT-XX and MT-XX clearly outperform Orig in all test sets created through translation, which is consistent with our previous results. In contrast, the best results on the original English set are obtained by Orig, and neither BT-XX nor MT-XX obtain any clear improvement on the one in Spanish either. This confirms that the underlying phenomenon is limited to translated test sets. In addition, it is worth mentioning that the results for the machine translated test set in English are slightly better than those for the human translated one, which suggests that the difficulty of the task does not only depend on the translation quality. Finally, it is also interesting that MT-ES is only marginally better than MT-FI in both Spanish test sets, even if it corresponds to the Translate-Train approach, whereas MT-FI needs to Zero-Shot transfer from Finnish into Spanish. This reinforces the idea that it is training on translated data rather than training on the target language that is key in Translate-Train. NLI experiments ::: Stress tests In order to better understand how systems trained on original and translated data differ, we run additional experiments on the NLI Stress Tests BIBREF19, which were designed to test the robustness of NLI models to specific linguistic phenomena in English. The benchmark consists of a competence test, which evaluates the ability to understand antonymy relation and perform numerical reasoning, a distraction test, which evaluates the robustness to shallow patterns like lexical overlap and the presence of negation words, and a noise test, which evaluates robustness to spelling errors. 
Just as with previous experiments, we report results for the best epoch checkpoint in each test set. As shown in Table TABREF23, Orig outperforms BT-FI and MT-FI on the competence test by a large margin, but the opposite is true on the distraction test. In particular, our results show that BT-FI and MT-FI are less reliant on lexical overlap and the presence of negative words. This feels intuitive, as translating the premise and hypothesis independently—as BT-FI and MT-FI do—is likely to reduce the lexical overlap between them. More generally, the translation process can alter similar superficial patterns in the data, which NLI models are sensitive to (§SECREF2). This would explain why the resulting models have a different behavior on different stress tests. NLI experiments ::: Output class distribution With the aim to understand the effect of the previous phenomenon in cross-lingual settings, we look at the output class distribution of our different models in the XNLI development set. As shown in Table TABREF28, the predictions of all systems are close to the true class distribution in the case of English. Nevertheless, Orig is strongly biased for the rest of languages, and tends to underpredict entailment and overpredict neutral. This can again be attributed to the fact that the English test set is original, whereas the rest are human translated. In particular, it is well-known that NLI models tend to predict entailment when there is a high lexical overlap between the premise and the hypothesis (§SECREF2). However, the degree of overlap will be smaller in the human translated test sets given that the premise and the hypothesis were translated independently, which explains why entailment is underpredicted. In contrast, BT-FI and MT-FI are exposed to the exact same phenomenon during training, which explains why they are not that heavily affected. So as to measure the impact of this phenomenon, we explore a simple approach to correct this bias: having fine-tuned each model, we adjust the bias term added to the logit of each class so the model predictions match the true class distribution for each language. As shown in Table TABREF29, this brings large improvements for Orig, but is less effective for BT-FI and MT-FI. This shows that the performance of Orig was considerably hindered by this bias, which BT-FI and MT-FI effectively mitigate. NLI experiments ::: Comparison with the state-of-the-art So as to put our results into perspective, we compare our best variant to previous work on the XNLI test set. As shown in Table TABREF31, our method improves the state-of-the-art for both the Translate-Test and the Zero-Shot approaches by 4.3 and 2.8 points, respectively. It also obtains the best overall results published to date, with the additional advantage that the previous state-of-the-art required a machine translation system between English and each of the 14 target languages, whereas our method uses a single machine translation system between English and Finnish (which is not one of the target languages). While the main goal of our work is not to design better cross-lingual models, but to analyze their behavior in connection to translation, this shows that the phenomenon under study is highly relevant, to the extent that it can be exploited to improve the state-of-the-art. QA experiments So as to understand whether our previous findings apply to other tasks besides NLI, we run additional experiments on QA. 
As shown in Table TABREF32, BT-FI and BT-ES do indeed outperform Orig for the Translate-Test approach on MLQA. The improvement is modest, but very consistent across different languages, models and runs. The results for MT-ES and MT-FI are less conclusive, presumably because mapping the answer spans across languages might introduce some noise. In contrast, we do not observe any clear improvement for the Zero-Shot approach on this dataset. Our XQuAD results in Table TABREF33 are more positive, but still inconclusive. These results can partly be explained by the translation procedure used to create the different benchmarks: the premises and hypotheses of XNLI were translated independently, whereas the questions and context paragraphs of XQuAD were translated together. Similarly, MLQA made use of parallel contexts, and translators were shown the sentence containing each answer when translating the corresponding question. As a result, one can expect both QA benchmarks to have more consistent translations than XNLI, which would in turn diminish this phenomenon. In contrast, the questions and context paragraphs are independently translated when using machine translation, which explains why BT-ES and BT-FI outperform Orig for the Translate-Test approach. We conclude that the translation artifacts revealed by our analysis are not exclusive to NLI, as they also show up on QA for the Translate-Test approach, but their actual impact can be highly dependent on the translation procedure used and the nature of the task. Discussion Our analysis prompts to reconsider previous findings in cross-lingual transfer learning as follows: Discussion ::: The cross-lingual transfer gap on XNLI was overestimated. Given the parallel nature of XNLI, accuracy differences across languages are commonly interpreted as the loss of performance when generalizing from English to the rest of languages. However, our work shows that there is another factor that can have a much larger impact: the loss of performance when generalizing from original to translated data. Our results suggest that the real cross-lingual generalization ability of XLM-R is considerably better than what the accuracy numbers in XNLI reflect. Discussion ::: Overcoming the cross-lingual gap is not what makes Translate-Train work. The original motivation for Translate-Train was to train the model on the same language it is tested on. However, we show that it is training on translated data, rather than training on the target language, that is key for this approach to outperform Zero-Shot as reported by previous authors. Discussion ::: Improvements previously attributed to data augmentation should be reconsidered. The method by BIBREF4 combines machine translated premises and hypotheses in different languages (§SECREF2), resulting in an effect similar to BT-XX and MT-XX. As such, we believe that this method should be analyzed from the point of view of dataset artifacts rather than data augmentation, as the authors do. From this perspective, having the premise and the hypotheses in different languages can reduce the superficial patterns between them, which would explain why this approach is better than using examples in a single language. Discussion ::: The potential of Translate-Test was underestimated. The previous best results for Translate-Test on XNLI lagged behind the state-of-the-art by 4.6 points. Our work reduces this gap to only 0.8 points by addressing the underlying translation artifacts. 
The reason why Translate-Test is more severely affected by this phenomenon is twofold: (i) the effect is doubled by first using human translation to create the test set and then machine translation to translate it back to English, and (ii) Translate-Train was inadvertently mitigating this issue (see above), but equivalent techniques were never applied to Translate-Test. Discussion ::: Future evaluation should better account for translation artifacts. The evaluation issues raised by our analysis do not have a simple solution. In fact, while we use the term translation artifacts to highlight that they are an unintended effect of translation that impacts final evaluation, one could also argue that it is the original datasets that contain the artifacts, which translation simply alters or even mitigates. In any case, this is a more general issue that falls beyond the scope of cross-lingual transfer learning, so we argue that it should be carefully controlled when evaluating cross-lingual models. In the absence of more robust datasets, we recommend that future multilingual benchmarks should at least provide consistent test sets for English and the rest of languages. This can be achieved by (i) using original annotations in all languages, (ii) using original annotations in a non-English language and translating them into English and other languages, or (iii) if translating from English, doing so at the document level to minimize translation inconsistencies. Conclusions In this paper, we have shown that both human and machine translation can alter superficial patterns in data, which requires reconsidering previous findings in cross-lingual transfer learning. Based on the gained insights, we have improved the state-of-the-art in XNLI for the Translate-Test and Zero-Shot approaches by a substantial margin. Finally, we have shown that the phenomenon is not specific to NLI but also affects QA, although it is less pronounced there thanks to the translation procedure used in the corresponding benchmarks. So as to facilitate similar studies in the future, we release our NLI dataset, which, unlike previous benchmarks, was annotated in a non-English language and human translated into English. Acknowledgments We thank Nora Aranberri and Uxoa Iñurrieta for helpful discussion during the development of this work, as well as the rest of our colleagues from the IXA group that worked as annotators for our NLI dataset. This research was partially funded by a Facebook Fellowship, the Basque Government excellence research group (IT1343-19), the Spanish MINECO (UnsupMT TIN2017‐91692‐EXP MCIU/AEI/FEDER, UE), Project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018), and the NVIDIA GPU grant program. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What are the languages they use in their experiment? Only give me the answer and do not output any other words.
Answer:
[ "English\nFrench\nSpanish\nGerman\nGreek\nBulgarian\nRussian\nTurkish\nArabic\nVietnamese\nThai\nChinese\nHindi\nSwahili\nUrdu\nFinnish", "English, Spanish, Finnish" ]
276
5,414
single_doc_qa
b6e15b89fc5846d69cc8cf0a4648ecd2
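Passage 1 above describes correcting the output class distribution by adjusting the bias term added to each class logit so that the model's predictions match the true class distribution for each language. The snippet below is one simple way to implement that idea with NumPy; it is a sketch of the general technique, not the authors' exact procedure, and the fixed-point update rule, learning rate, and iteration count are assumptions.

```python
import numpy as np

def fit_class_bias(logits, target_dist, lr=0.5, n_iter=1000):
    """Find per-class offsets b so that argmax(logits + b) predictions
    approximately match a target class distribution."""
    n_classes = logits.shape[1]
    b = np.zeros(n_classes)
    for _ in range(n_iter):
        pred = np.argmax(logits + b, axis=1)
        pred_dist = np.bincount(pred, minlength=n_classes) / len(pred)
        # Raise the bias of under-predicted classes, lower over-predicted ones.
        b += lr * (target_dist - pred_dist)
    return b

# Toy usage: three NLI classes with a uniform target distribution.
rng = np.random.default_rng(0)
dev_logits = rng.normal(size=(5010, 3))           # stand-in for model outputs
bias = fit_class_bias(dev_logits, np.array([1 / 3, 1 / 3, 1 / 3]))
corrected = np.argmax(dev_logits + bias, axis=1)  # bias-corrected predictions
```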
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Paper Info. Title: Environmental variability and network structure determine the optimal plasticity mechanisms in embodied agents. Publish Date: Unknown. Author List: Sina Khajehabdollahi (Department of Computer Science, University of Tübingen). Figure captions: Figure 2: An outline of the network controlling the foraging agent. The sensor layer receives inputs at each time step (the ingredients of the nearest food), which are processed by the plastic layer in the same way as the static sensory network (Fig. 1). The output of that network is given as input to the motor network, along with the distance d and angle α to the nearest food, the current velocity v, and the energy E of the agent. These signals are processed through two hidden layers to produce the final motor commands, the linear and angular acceleration of the agent. Figure 4: The evolved parameters θ = (θ_1, ..., θ_8) of the plasticity rule for the reward-prediction (a.) and the decision (b.) tasks, for a variety of parameters (p_tr = 0.01, d_e ∈ {0, 0.1, ..., 1}, and σ ∈ {0, 0.1, ..., 1} in all 100 combinations). Despite the relatively small difference between the tasks, the evolved learning rules differ considerably. For visual guidance, the lines connect θs from the same run. Figure 5: a. The trajectory of an agent (blue line) in the 2D environment. A well-trained agent will approach and consume food with positive values (green dots) and avoid negative food (red dots). b. The learning rate of the plastic sensory network η_p grows with the distance between environments d_e, c. and decreases with the frequency of environmental change. d. The fitness of an agent (measured as the total food consumed over its lifetime) increases over generations of the EA for both the scalar and binary readouts in the sensory network. e. The Pearson correlation coefficient of an evolved agent's weights with the ingredient value vector of the current environment (E_1 in blue, E_2 in red). In this example, the agent's weights are anti-correlated with its environment, which is not an issue for performance since the motor network can interpret the inverted signs of food. Abstract: The evolutionary balance between innate and learned behaviors is highly intricate, and different organisms have found different solutions to this problem. We hypothesize that the emergence and exact form of learning behaviors are naturally connected with the statistics of environmental fluctuations and the tasks an organism needs to solve. Here, we study how different aspects of simulated environments shape an evolved synaptic plasticity rule in static and moving artificial agents. We demonstrate that environmental fluctuation and uncertainty control the reliance of artificial organisms on plasticity. Interestingly, the form of the emerging plasticity rule is additionally determined by the details of the task the artificial organisms are aiming to solve. Moreover, we show that coevolution between static connectivity and interacting plasticity mechanisms in distinct sub-networks changes the function and form of the emerging plasticity rules in embodied agents performing a foraging task. One of the defining features of living organisms is their ability to adapt to their environment and incorporate new information to modify their behavior.
It is unclear how the ability to learn first evolved , but its utility appears evident. Natural environments are too complex for all the necessary information to be hardcoded genetically and more importantly, they keep changing during an organism's lifetime in ways that cannot be anticipated ; . The link between learning and environmental uncertainty and fluctuation has been extensively demonstrated in both natural ; , and artificial environments . Nevertheless, the ability to learn does not come without costs. For the capacity to learn to be beneficial in evolutionary terms, a costly nurturing period is often required, a phenomenon observed in both biological , and artificial organisms . Additionally, it has been shown that in some complex environments, hardcoded behaviors may be superior to learned ones given limits in the agent's lifetime and envi-ronmental uncertainty ; ; . The theoretical investigation of the optimal balance between learned and innate behaviors in natural and artificial systems goes back several decades. However, it has recently found also a wide range of applications in applied AI systems ; . Most AI systems are trained for specific tasks, and have no need for modification after their training has been completed. Still, technological advances and the necessity to solve broad families of tasks make discussions about life-like AI systems relevant to a wide range of potential application areas. Thus the idea of open-ended AI agents that can continually interact with and adapt to changing environments has become particularly appealing. Many different approaches for introducing lifelong learning in artificial agents have been proposed. Some of them draw direct inspiration from actual biological systems ; . Among them, the most biologically plausible solution is to equip artificial neural networks with some local neural plasticity , similar to the large variety of synaptic plasticity mechanisms ; ; that performs the bulk of the learning in the brains of living organisms . The artificial plasticity mechanisms can be optimized to modify the connectivity of the artificial neural networks toward solving a particular task. The optimization can use a variety of approaches, most commonly evolutionary computation. The idea of meta-learning or optimizing synaptic plasticity rules to perform specific functions has been recently established as an engineering tool that can compete with stateof-the-art machine learning algorithms on various complex tasks ; ; Pedersen and Risi (2021); . Additionally, it can be used to reverse engineer actual plasticity mechanisms found in biological neural networks and uncover their functions ; . Here, we study the effect that different factors (environ-arXiv:2303.06734v1 [q-bio.NC] 12 Mar 2023 mental fluctuation and reliability, task complexity) have on the form of evolved functional reward-modulated plasticity rules. We investigate the evolution of plasticity rules in static, single-layer simple networks. Then we increase the complexity by switching to moving agents performing a complex foraging task. In both cases, we study the impact of different environmental parameters on the form of the evolved plasticity mechanisms and the interaction of learned and static network connectivity. Interestingly, we find that different environmental conditions and different combinations of static and plastic connectivity have a very large impact on the resulting plasticity rules. 
We imagine an agent who must forage to survive in an environment presenting various types of complex food particles. Each food particle is composed of various amounts and combinations of N ingredients that can have positive (food) or negative (poison) values. The value of a food particle is a weighted sum of its ingredients. To predict the reward value of a given resource, the agent must learn the values of these ingredients by interacting with the environment. The priors could be generated by genetic memory, but the exact values are subject to change. To introduce environmental variability, we stochastically change the values of the ingredients. More precisely, we define two ingredient-value distributions E 1 and E 2 and switch between them, with probability p tr for every time step. We control how (dis)similar the environments are by parametrically setting E 2 = (1 − 2d e )E 1 , with d e ∈ [0, 1] serving as a distance proxy for the environments; when d e = 0, the environment remains unchanged, and when d e = 1 the value of each ingredient fully reverses when the environmental transition happens. For simplicity, we take values of the ingredients in E 1 equally spaced between -1 and 1 (for the visualization, see Fig. ). The static agent receives passively presented food as a vector of ingredients and can assess its compound value using the linear summation of its sensors with the (learned or evolved) weights, see Fig. . The network consists of N sensory neurons that are projecting to a single post-synaptic neuron. At each time step, an input X t = (x 1 , . . . , x N ) is presented, were the value x i , i ∈ {1, . . . , N } represents the quantity of the ingredient i. We draw x i independently form a uniform distribution on the [0, 1] interval (x i ∼ U (0, 1)). The value of each ingredient w c i is determined by the environment (E 1 or E 2 ). The postsynaptic neuron outputs a prediction of the food X t value as y t = g(W X T t ). Throughout the paper, g will be either the identity function, in which case the prediction neuron is linear, or a step-function; however, it could be any other nonlinearity, such as a sigmoid or ReLU. After outputting the prediction, the neuron receives feedback in the form of the real value of the input R t . The real value is computed as R t = W c X T t + ξ, where W c = (w c 1 , . . . , w c N ) is the actual value of the ingredients, and ξ is a term summarizing the noise of reward and sensing system ξ ∼ N (0, σ). Figure : An outline of the static agent's network. The sensor layer receives inputs representing the quantity of each ingredient of a given food at each time step. The agent computes the prediction of the food's value y t and is then given the true value R t ; it finally uses this information in the plasticity rule to update the weight matrix. For the evolutionary adjustment of the agent's parameters, the loss of the static agent is the sum of the mean squared errors (MSE) between its prediction y t and the reward R t over the lifetime of the agent. The agent's initial weights are set to the average of the two ingredient value distributions, which is the optimal initial value for the case of symmetric switching of environments that we consider here. As a next step, we incorporate the sensory network of static agents into embodied agents that can move around in an environment scattered with food. 
To this end, we merge the static agent's network with a second, non-plastic motor network that is responsible for controlling the motion of the agent in the environment. Specifically, the original plastic network now provides the agent with information about the value of the nearest food. The embodied agent has additional sensors for the distance from the nearest food, the angle between the current velocity and the nearest food direction, its own velocity, and its own energy level (sum of consumed food values). These inputs are processed by two hidden layers (of 30 and 15 neurons) with tanh activation. The network's outputs are angular and linear acceleration, Fig. . The embodied agents spawn in a 2D space with periodic boundary conditions along with a number of food particles that are selected such that the mean of the food value distribution is ∼ 0. An agent can eat food by approaching it sufficiently closely, and each time a food particle is eaten, it is The sensor layer receives inputs at each time step (the ingredients of the nearest food), which are processed by the plastic layer in the same way as the static sensory network, Fig. . The output of that network is given as input to the motor network, along with the distance d and angle α to the nearest food, the current velocity v, and energy E of the agent. These signals are processed through two hidden layers to the final output of motor commands as the linear and angular acceleration of the agent re-spawned with the same value somewhere randomly on the grid (following the setup of ). After 5000 time steps, the cumulative reward of the agent (the sum of the values of all the food it consumed) is taken as its fitness. During the evolutionary optimization, the parameters for both the motor network (connections) and plastic network (learning rule parameters) are co-evolved, and so agents must simultaneously learn to move and discriminate good/bad food. Reward-modulated plasticity is one of the most promising explanations for biological credit assignment . In our network, the plasticity rule that updates the weights of the linear sensor network is a rewardmodulated rule which is parameterized as a linear combination of the input, the output, and the reward at each time step: Additionally, after each plasticity step, the weights are normalized by mean subtraction, an important step for the stabilization of Hebbian-like plasticity rules . We use a genetic algorithm to optimize the learning rate η p and amplitudes of different terms θ = (θ 1 , . . . , θ 8 ). The successful plasticity rule after many food presentations must converge to a weight vector that predicts the correct food values (or allows the agent to correctly decide whether to eat a food or avoid it). To have comparable results, we divide θ = (θ 1 , . . . , θ 8 ) by We then multiply the learning rate η p with θ max to maintain the rule's evolved form unchanged, η norm p = η p • θ max . In the following, we always use normalized η p and θ, omitting norm . To evolve the plasticity rule and the moving agents' motor networks, we use a simple genetic algorithm with elitism . The agents' parameters are initialized at random (drawn from a Gaussian distribution), then the sensory network is trained by the plasticity rule and finally, the agents are evaluated. After each generation, the bestperforming agents (top 10 % of the population size) are selected and copied into the next generation. 
The remaining 90 % of the generation is repopulated with mutated copies of the best-performing agents. We mutate agents by adding independent Gaussian noise (σ = 0.1) to its parameters. To start with, we consider a static agent whose goal is to identify the value of presented food correctly. The static reward-prediction network quickly evolves the parameters of the learning rule, successfully solving the prediction task. We first look at the evolved learning rate η p , which determines how fast (if at all) the network's weight vector is updated during the lifetime of the agents. We identify three factors that control the learning rate parameter the EA converges to: the distance between the environments, the noisiness of the reward, and the rate of environmental transition. The first natural factor is the distance d e between the two environments, with a larger distance requiring a higher learning rate, Fig. . This is an expected result since the convergence time to the "correct" weights is highly dependent on the initial conditions. If an agent is born at a point very close to optimality, which naturally happens if the environments are similar, the distance it needs to traverse on the fitness landscape is small. Therefore it can afford to have a small learning rate, which leads to a more stable convergence and is not affected by noise. A second parameter that impacts the learning rate is the variance of the rewards. The reward an agent receives for the plasticity step contains a noise term ξ that is drawn from a zero mean Gaussian distribution with standard deviation σ. This parameter controls the unreliability of the agent's sensory system, i.e., higher σ means that the information the agent gets about the value of the foods it consumes cannot be fully trusted to reflect the actual value of the foods. As σ increases, the learning rate η p decreases, which means that the more unreliable an environment becomes, the less an agent relies on plasticity to update its weights, Fig. . Indeed for some combinations of relatively small distance d e and high reward variance σ, the EA converges to a learning rate of η p ≈ 0. This means that the agent opts to have no adaptation during its lifetime and remain at the mean of the two environments. It is an optimal solution when the expected loss due to ignoring the environmental transitions is, on average, lower than the loss the plastic network will incur by learning via the (often misleading because of the high σ) environmental cues. A final factor that affects the learning rate the EA will converge to is the frequency of environmental change during an agent's lifetime. Since the environmental change is modeled as a simple, two-state Markov process (Fig. ), the control parameter is the transition probability p tr . When keeping everything else the same, the learning rate rapidly rises as we increase the transition probability from 0, and after reaching a peak, it begins to decline slowly, eventually reaching zero (Fig. ). This means that when environmental transition is very rare, agents opt for a very low learning rate, allowing a slow and stable convergence to an environment-appropriate weight vector that leads to very low losses while the agent remains in that environment. As the rate of environmental transition increases, faster learning is required to speed up convergence in order to exploit the (comparatively shorter) stays in each environment. 
Finally, as the environmental transition becomes too fast, the agents opt for slower or even no learning, which keeps them ) and the decision (b.) tasks, for a variety of parameters (p tr = 0.01, d e ∈ 0, 0.1, . . . , 1, and σ ∈ 0, 0.1, . . . , 1 in all 100 combinations). Despite the relatively small difference between the tasks, the evolved learning rules differ considerably. For visual guidance, the lines connect θs from the same run. near the middle of the two environments, ensuring that the average loss of the two environments is minimal (Fig. ). The form of the evolved learning rule depends on the task: Decision vs. Prediction The plasticity parameters θ = (θ 1 , . . . , θ 8 ) for the rewardprediction task converge on approximately the same point, regardless of the environmental parameters (Fig. ). In particular, θ 3 → 1, θ 5 → −1, θ i → 0 for all other i, and thus the learning rule converges to: Since by definition y t = g(W t X T t ) = W t X T t (g(x) = x in this experiment) and R t = W c X T t + ξ we get: Thus the distribution of ∆W t converges to a distribution with mean 0 and variance depending on η p and σ and W converges to W c . So this learning rule will match the agent's weight vector with the vector of ingredient values in the environment. We examine the robustness of the learning rule the EA discovers by considering a slight modification of our task. Instead of predicting the expected food value, the agent now needs to decide whether to eat the presented food or not. This is done by introducing a step-function nonlinearity (g(x) = 1 if x ≥ 1 and 0 otherwise). Then the output y(t) is computed as: Instead of the MSE loss between prediction and actual value, the fitness of the agent is now defined as the sum of the food values it chose to consume (by giving y t = 1). Besides these two changes, the setup of the experiments remains exactly the same. The qualitative relation between η p and parameters of environment d e , σ and p tr is preserved in the changed experiment. However, the resulting learning rule is significantly different (Fig. ). The evolution converges to the following learning rule: In both cases, the rule has the form ∆W t = η p X t [α y R t + β y ]. Thus, the ∆W t is positive or negative depending on whether the reward R t is above or below a threshold (γ = −β y /α y ) that depends on the output decision of the network (y t = 0 or 1). Both learning rules (for the reward-prediction and decision tasks) have a clear Hebbian form (coordination of preand post-synaptic activity) and use the incoming reward signal as a threshold. These similarities indicate some common organizing principles of reward-modulated learning rules, but their significant differences highlight the sensitivity of the optimization process to task details. We now turn to the moving embodied agents in the 2D environment. To optimize these agents, both the motor network's connections and the sensory network's plasticity parameters evolve simultaneously. Since the motor network is initially random and the agent has to move to find food, the number of interactions an agent experiences in its lifetime can be small, slowing down the learning. However, having the larger motor network also has benefits for evolution because it allows the output of the plastic network to be read out and transformed in different ways, resulting in a broad set of solutions. 
The fitness of an agent (measured as the total food consumed over its lifetime) increases over generations of the EA for both the scalar and binary readouts in the sensory network. e. The Pearson correlation coefficient of an evolved agent's weights with the ingredient value vector of the current environment (E 1 -blue, E 2 -red). In this example, the agent's weights are anti-correlated with its environment, which is not an issue for performance since the motor network can interpret the inverted signs of food. The agents can solve the task effectively by evolving a functional motor network and a plasticity rule that converges to interpretable weights (Fig. ). After ∼ 100 evolutionary steps (Fig. ), the agents can learn the ingredient value distribution using the plastic network and reliably move towards foods with positive values while avoiding the ones with negative values. We compare the dependence of the moving and the static agents on the parameters of the environment: d e and the state transition probability p tr . At first, in order to simplify the experiment, we set the transition probability to 0, but fixed the initial weights to be the average of E 1 and E 2 , while the real state is E 2 . In this experiment, the distance between states d e indicates twice the distance between the agent's initial weights and the optimal weights (the environment's ingredient values) since the agent is initialized at the mean of the two environment distributions. Same as for the static agent, the learning rate increases with the distance d e (Fig. ). Then, we examine the effect of the environmental transition probability p tr on the evolved learning rate η p . In order for an agent to get sufficient exposure to each environment, we scale down the probability p tr from the equivalent experiment for the static agents. We find that as the probability of transition increases, the evolved learning rate η p decreases (Fig. ). This fits with the larger trend for the static agent, although there is a clear difference when it comes to the increase for very small transition probabil-ities that were clearly identifiable in the static but not the moving agents. This could be due to much sparser data and possibly the insufficiently long lifetime of the moving agent (the necessity of scaling makes direct comparisons difficult). Nevertheless, overall we see that the associations observed in the static agents between environmental distance d e and transition probability p tr and the evolved learning rate η p are largely maintained in the moving agents. Still, more data would be needed to make any conclusive assertions about the exact effect of these environmental parameters on the emerging plasticity mechanisms. A crucial difference between the static and the moving agents is the function the plasticity has to perform. While in the static agents, the plasticity has to effectively identify the exact value distribution of the environment in order to produce accurate predictions, in the embodied agents, the plasticity has to merely produce a representation of the environment that the motor network can evolve to interpret adequately enough to make decisions about which food to consume. To illustrate the difference, we plot the Pearson correlation coefficient between an agent's weights and the ingredient values of the environment it is moving in (Fig. ). We use the correlation instead of the MSE loss (which we used for the static agents in Fig. 
) because the amplitude of the vector varies a lot for different agents and meaningful The evolved parameters of moving agents' plasticity rule for the g(s) = x, identity (a.) and the step function (Eq. 4) (b.) sensory networks (the environmental parameters here are d e ∈ [0, 1], σ = 0 and p tr = 0.001). The step function (binary output) network evolved a more structured plasticity rule (e.g., θ 3 > 0 for all realizations) than the linear network. Moreover, the learned weights for the identity network (c.) have higher variance and correlate significantly less with the environment's ingredient distribution compared to the learned weights for the thresholded network (d.) conclusions cannot be drawn from the MSE loss. For many agents, the learned weights are consistently anti-correlated with the actual ingredient values (an example of such an agent is shown in Fig. ). This means that the output of the sensory network will have the opposite sign from the actual food value. While in the static network, this would lead to very bad predictions and high loss, in the foraging task, these agents perform exactly as well as the ones where the weights and ingredients values are positively correlated, since the motor network can simply learn to move towards food for which it gets a negative instead of a positive sensory input. This additional step of the output of the plastic network going through the motor network before producing any behavior has a strong effect on the plasticity rules that the embodied agents evolve. Specifically, if we look at the emerging rules the top performing agents have evolved (Fig. ), it becomes clear that, unlike the very well-structured rules of the static agents (Fig. ), there is now virtually no discernible pattern or structure. The difference becomes even clearer if we look at the learned weights (at the end of a simulation) of the best-performing agents (Fig. ). While there is some correlation with the environment's ingredient value distribution, the variance is very large, and they do not seem to converge on the "correct" values in any way. This is to some extent expected since, unlike the static agents where the network's output has to be exactly correct, driving the evolution of rules that converge to the precise environmental distribution, in the embodied networks, the bulk of the processing is done by the motor network which can evolve to interpret the scalar value of the sensory network's output in a variety of ways. Thus, as long as the sensory network's plasticity rule co-evolves with the motor network, any plasticity rule that learns to produce consistent information about the value of encountered food can potentially be selected. To further test this assumption, we introduce a bottleneck of information propagation between the sensory and motor networks by using a step-function nonlinearity on the output of the sensory network (Eq. 4). Similarly to the decision task of the static network, the output of the sensory network now becomes binary. This effectively reduces the flow of information from the sensory to the motor network, forcing the sensory network to consistently decide whether food should be consumed (with the caveat that the motor network can still interpret the binary sign in either of two ways, either consuming food marked with 1 or the ones marked with 0 by the sensory network). The agents perform equally well in this variation of the task as before (Fig. ), but now, the evolved plasticity rules seem to be more structured (Fig. ). 
Moreover, the variance of the learned weights in the best-performing agents is significantly reduced (Fig. ), which indicates that the bottleneck in the sensory network increases selection pressure for rules that learn the environment's food distribution accurately. We find that different sources of variability have a strong impact on the extent to which evolving agents will develop neuronal plasticity mechanisms for adapting to their environment. A diverse environment, a reliable sensory system, and a rate of environmental change that is neither too large nor too small are necessary conditions for an agent to be able to adapt effectively via synaptic plasticity. Additionally, we find that minor variations in the task an agent has to solve, or in the parametrization of the network, can give rise to significantly different plasticity rules. Our results partially extend to embodied artificial agents performing a foraging task. We show that environmental variability also pushes the development of plasticity in such agents. Still, in contrast to the static agents, we find that the interaction of a static motor network with a plastic sensory network gives rise to a much greater variety of well-functioning learning rules. We propose a potential cause of this degeneracy: as the relatively complex motor network is allowed to read out and process the outputs of the plastic network, any consistent information in these outputs can potentially be interpreted in a behaviorally useful way. Reducing the information the motor network can extract from the sensory system significantly limits learning rule variability. Our findings on the effect of environmental variability concur with previous studies that have identified the constraints environmental variability places on the evolutionary viability of learning behaviors. We extend these findings with a mechanistic model that uses a biologically plausible learning mechanism (synaptic plasticity). We show how a simple evolutionary algorithm can optimize the different parameters of a simple reward-modulated plasticity rule for solving simple prediction and decision tasks. Reward-modulated plasticity has been extensively studied as a plausible mechanism for credit assignment in the brain and has found several applications in artificial intelligence and robotics tasks. Here, we demonstrate how such rules can be tuned to take different environmental parameters into account and produce optimal behavior in simple systems. Additionally, we demonstrate how the co-evolution of plasticity and static functional connectivity in different subnetworks fundamentally changes the evolutionary pressures on the resulting plasticity rules, allowing for greater diversity in the form of the learning rule and the resulting learned connectivity. Several studies have demonstrated how, in biological networks, synaptic plasticity heavily interacts with and is driven by network topology. Moreover, it has recently been demonstrated that biological plasticity mechanisms are highly redundant, in the sense that any observed neural connectivity or recorded activity can be achieved with a variety of distinct, unrelated learning rules. This observed redundancy of learning rules in biological settings complements our results and suggests that the function of plasticity rules cannot be studied independently of the connectivity and topology of the networks they act on.
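As a rough illustration of how an evolutionary algorithm can tune the parameters of such a plasticity rule, here is a minimal mutation-selection loop; the population size, mutation scale, and the toy fitness function are placeholders, not the configuration used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_lifetime_fitness(theta, target=np.array([0.5, -0.2, 0.0, 1.0])):
    """Toy stand-in for a full foraging simulation: fitness peaks when the
    plasticity parameters `theta` match a hidden 'optimal' rule."""
    return -float(np.sum((theta - target) ** 2))

def evolve_plasticity_rule(n_params=4, pop_size=50, generations=100, sigma=0.1):
    population = rng.normal(0.0, 1.0, size=(pop_size, n_params))
    best_theta, best_fit = None, -np.inf
    for _ in range(generations):
        fitness = np.array([evaluate_lifetime_fitness(th) for th in population])
        if fitness.max() > best_fit:
            best_fit, best_theta = fitness.max(), population[fitness.argmax()].copy()
        parents = population[np.argsort(fitness)[-pop_size // 5:]]        # keep the top 20%
        children = parents[rng.integers(len(parents), size=pop_size)]     # resample parents
        population = children + rng.normal(0.0, sigma, population.shape)  # mutate
    return best_theta

print(evolve_plasticity_rule())  # converges towards the toy target parameters
```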
The optimization of functional plasticity in neural networks is a promising research direction both as a means to understand biological learning processes and as a tool for building more autonomous artificial systems. Our results suggest that reward-modulated plasticity is highly adaptable to different environments and can be incorporated into larger systems that solve complex tasks. This work studies a simplified toy model of neural network learning in stochastic environments. Future work could be built on this basic framework to examine more complex reward distributions and sources of environmental variability. Moreover, a greater degree of biological realism could be added by studying more plausible network architectures (multiple plastic layers, recurrent and feedback connections) and more sophisticated plasticity rule parametrizations. Additionally, our foraging simulations were constrained by limited computational resources and were far from exhaustive. Further experiments can investigate environments with different constraints, food distributions, multiple seasons, more complex motor control systems and interactions of those systems with different sensory networks as well as the inclusion of plasticity on the motor parts of the artificial organisms. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What factors control the reliance of artificial organisms on plasticity? Only give me the answer and do not output any other words.
Answer:
[ "Environmental fluctuation and uncertainty control the reliance of artificial organisms on plasticity." ]
276
6,320
single_doc_qa
5a703fabd22d4c8b846cf8b8e56b33f9
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction This work is licenced under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/ Deep neural networks have been widely used in text classification and have achieved promising results BIBREF0 , BIBREF1 , BIBREF2 . Most focus on content information and use models such as convolutional neural networks (CNN) BIBREF3 or recursive neural networks BIBREF4 . However, for user-generated posts on social media like Facebook or Twitter, there is more information that should not be ignored. On social media platforms, a user can act either as the author of a post or as a reader who expresses his or her comments about the post. In this paper, we classify posts taking into account post authorship, likes, topics, and comments. In particular, users and their “likes” hold strong potential for text mining. For example, given a set of posts that are related to a specific topic, a user's likes and dislikes provide clues for stance labeling. From a user point of view, users with positive attitudes toward the issue leave positive comments on the posts with praise or even just the post's content; from a post point of view, positive posts attract users who hold positive stances. We also investigate the influence of topics: different topics are associated with different stance labeling tendencies and word usage. For example we discuss women's rights and unwanted babies on the topic of abortion, but we criticize medicine usage or crime when on the topic of marijuana BIBREF5 . Even for posts on a specific topic like nuclear power, a variety of arguments are raised: green energy, radiation, air pollution, and so on. As for comments, we treat them as additional text information. The arguments in the comments and the commenters (the users who leave the comments) provide hints on the post's content and further facilitate stance classification. In this paper, we propose the user-topic-comment neural network (UTCNN), a deep learning model that utilizes user, topic, and comment information. We attempt to learn user and topic representations which encode user interactions and topic influences to further enhance text classification, and we also incorporate comment information. We evaluate this model on a post stance classification task on forum-style social media platforms. The contributions of this paper are as follows: 1. We propose UTCNN, a neural network for text in modern social media channels as well as legacy social media, forums, and message boards — anywhere that reveals users, their tastes, as well as their replies to posts. 2. When classifying social media post stances, we leverage users, including authors and likers. User embeddings can be generated even for users who have never posted anything. 3. We incorporate a topic model to automatically assign topics to each post in a single topic dataset. 4. We show that overall, the proposed method achieves the highest performance in all instances, and that all of the information extracted, whether users, topics, or comments, still has its contributions. Extra-Linguistic Features for Stance Classification In this paper we aim to use text as well as other features to see how they complement each other in a deep learning model. 
In the stance classification domain, previous work has shown that text features are limited, suggesting that adding extra-linguistic constraints could improve performance BIBREF6 , BIBREF7 , BIBREF8 . For example, Hasan and Ng as well as Thomas et al. require that posts written by the same author have the same stance BIBREF9 , BIBREF10 . The addition of this constraint yields accuracy improvements of 1–7% for some models and datasets. Hasan and Ng later added user-interaction constraints and ideology constraints BIBREF7 : the former models the relationship among posts in a sequence of replies and the latter models inter-topic relationships, e.g., users who oppose abortion could be conservative and thus are likely to oppose gay rights. For work focusing on online forum text, since posts are linked through user replies, sequential labeling methods have been used to model relationships between posts. For example, Hasan and Ng use hidden Markov models (HMMs) to model dependent relationships to the preceding post BIBREF9 ; Burfoot et al. use iterative classification to repeatedly generate new estimates based on the current state of knowledge BIBREF11 ; Sridhar et al. use probabilistic soft logic (PSL) to model reply links via collaborative filtering BIBREF12 . In the Facebook dataset we study, we use comments instead of reply links. However, as the ultimate goal in this paper is predicting not comment stance but post stance, we treat comments as extra information for use in predicting post stance. Deep Learning on Extra-Linguistic Features In recent years neural network models have been applied to document sentiment classification BIBREF13 , BIBREF4 , BIBREF14 , BIBREF15 , BIBREF2 . Text features can be used in deep networks to capture text semantics or sentiment. For example, Dong et al. use an adaptive layer in a recursive neural network for target-dependent Twitter sentiment analysis, where targets are topics such as windows 7 or taylor swift BIBREF16 , BIBREF17 ; recursive neural tensor networks (RNTNs) utilize sentence parse trees to capture sentence-level sentiment for movie reviews BIBREF4 ; Le and Mikolov predict sentiment by using paragraph vectors to model each paragraph as a continuous representation BIBREF18 . They show that performance can thus be improved by more refined text models. Others have suggested using extra-linguistic features to improve deep learning models. The user-word composition vector model (UWCVM) BIBREF19 is inspired by the possibility that the strength of sentiment words is user-specific; to capture this, they add user embeddings to their model. In UPNN, a later extension, they further add a product-word composition as product embeddings, arguing that products can also show different tendencies of being rated or reviewed BIBREF20 . Their addition of user information yielded 2–10% improvements in accuracy as compared to the above-mentioned RNTN and paragraph vector methods. We also seek to inject user information into the neural network model. In comparison to the research of Tang et al. on sentiment classification for product reviews, the difference is two-fold. First, we take into account multiple users (one author and potentially many likers) for one post, whereas only one user (the reviewer) is involved in a review. Second, we add comment information to provide more features for post stance classification. Neither of these two factors has been considered previously in a deep learning model for text stance classification.
Therefore, we propose UTCNN, which generates and utilizes user embeddings for all users — even for those who have not authored any posts — and incorporates comments to further improve performance. Method In this section, we first describe CNN-based document composition, which captures user- and topic-dependent document-level semantic representation from word representations. Then we show how to add comment information to construct the user-topic-comment neural network (UTCNN). User- and Topic-dependent Document Composition As shown in Figure FIGREF4 , we use a general CNN BIBREF3 and two semantic transformations for document composition . We are given a document with an engaged user INLINEFORM0 , a topic INLINEFORM1 , and its composite INLINEFORM2 words, each word INLINEFORM3 of which is associated with a word embedding INLINEFORM4 where INLINEFORM5 is the vector dimension. For each word embedding INLINEFORM6 , we apply two dot operations as shown in Equation EQREF6 : DISPLAYFORM0 where INLINEFORM0 models the user reading preference for certain semantics, and INLINEFORM1 models the topic semantics; INLINEFORM2 and INLINEFORM3 are the dimensions of transformed user and topic embeddings respectively. We use INLINEFORM4 to model semantically what each user prefers to read and/or write, and use INLINEFORM5 to model the semantics of each topic. The dot operation of INLINEFORM6 and INLINEFORM7 transforms the global representation INLINEFORM8 to a user-dependent representation. Likewise, the dot operation of INLINEFORM9 and INLINEFORM10 transforms INLINEFORM11 to a topic-dependent representation. After the two dot operations on INLINEFORM0 , we have user-dependent and topic-dependent word vectors INLINEFORM1 and INLINEFORM2 , which are concatenated to form a user- and topic-dependent word vector INLINEFORM3 . Then the transformed word embeddings INLINEFORM4 are used as the CNN input. Here we apply three convolutional layers on the concatenated transformed word embeddings INLINEFORM5 : DISPLAYFORM0 where INLINEFORM0 is the index of words; INLINEFORM1 is a non-linear activation function (we use INLINEFORM2 ); INLINEFORM5 is the convolutional filter with input length INLINEFORM6 and output length INLINEFORM7 , where INLINEFORM8 is the window size of the convolutional operation; and INLINEFORM9 and INLINEFORM10 are the output and bias of the convolution layer INLINEFORM11 , respectively. In our experiments, the three window sizes INLINEFORM12 in the three convolution layers are one, two, and three, encoding unigram, bigram, and trigram semantics accordingly. After the convolutional layer, we add a maximum pooling layer among convolutional outputs to obtain the unigram, bigram, and trigram n-gram representations. This is succeeded by an average pooling layer for an element-wise average of the three maximized convolution outputs. UTCNN Model Description Figure FIGREF10 illustrates the UTCNN model. As more than one user may interact with a given post, we first add a maximum pooling layer after the user matrix embedding layer and user vector embedding layer to form a moderator matrix embedding INLINEFORM0 and a moderator vector embedding INLINEFORM1 for moderator INLINEFORM2 respectively, where INLINEFORM3 is used for the semantic transformation in the document composition process, as mentioned in the previous section. The term moderator here is to denote the pseudo user who provides the overall semantic/sentiment of all the engaged users for one document. 
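Because the inline math of this section was lost in extraction (the INLINEFORM placeholders), the sketch below only illustrates the general shape of the composition described in the text: each word embedding is transformed by a user matrix and a topic matrix, the results are concatenated, and unigram/bigram/trigram convolutions with max-pooling followed by an element-wise average produce the document representation. All dimensions, the tanh activation, and the variable names are our assumptions.

```python
import numpy as np

def compose_document(word_embs, U_user, U_topic, conv_filters):
    """word_embs: (n_words, d) word vectors; U_user: (d_u, d) and U_topic: (d_t, d)
    transformation matrices for the engaged user and the topic; conv_filters maps a
    window size w in {1, 2, 3} to a filter of shape (out_dim, w * (d_u + d_t))."""
    x_user = word_embs @ U_user.T                 # user-dependent word vectors
    x_topic = word_embs @ U_topic.T               # topic-dependent word vectors
    x = np.concatenate([x_user, x_topic], axis=1)

    pooled = []
    for w, F in conv_filters.items():             # unigram / bigram / trigram convolutions
        windows = np.stack([x[i:i + w].reshape(-1) for i in range(len(x) - w + 1)])
        h = np.tanh(windows @ F.T)                # activation assumed to be tanh
        pooled.append(h.max(axis=0))              # max-pooling over positions
    return np.mean(pooled, axis=0)                # element-wise average of the n-gram features
```

Note that in this sketch all three filters must share the same output dimension for the final average to be well defined.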
The embedding INLINEFORM4 models the moderator stance preference, that is, the pattern of the revealed user stance: whether a user is willing to show his preference, whether a user likes to show impartiality with neutral statements and reasonable arguments, or just wants to show strong support for one stance. Ideally, the latent user stance is modeled by INLINEFORM5 for each user. Likewise, for topic information, a maximum pooling layer is added after the topic matrix embedding layer and topic vector embedding layer to form a joint topic matrix embedding INLINEFORM6 and a joint topic vector embedding INLINEFORM7 for topic INLINEFORM8 respectively, where INLINEFORM9 models the semantic transformation of topic INLINEFORM10 as in users and INLINEFORM11 models the topic stance tendency. The latent topic stance is also modeled by INLINEFORM12 for each topic. As for comments, we view them as short documents with authors only but without likers nor their own comments. Therefore we apply document composition on comments although here users are commenters (users who comment). It is noticed that the word embeddings INLINEFORM0 for the same word in the posts and comments are the same, but after being transformed to INLINEFORM1 in the document composition process shown in Figure FIGREF4 , they might become different because of their different engaged users. The output comment representation together with the commenter vector embedding INLINEFORM2 and topic vector embedding INLINEFORM3 are concatenated and a maximum pooling layer is added to select the most important feature for comments. Instead of requiring that the comment stance agree with the post, UTCNN simply extracts the most important features of the comment contents; they could be helpful, whether they show obvious agreement or disagreement. Therefore when combining comment information here, the maximum pooling layer is more appropriate than other pooling or merging layers. Indeed, we believe this is one reason for UTCNN's performance gains. Finally, the pooled comment representation, together with user vector embedding INLINEFORM0 , topic vector embedding INLINEFORM1 , and document representation are fed to a fully connected network, and softmax is applied to yield the final stance label prediction for the post. Experiment We start with the experimental dataset and then describe the training process as well as the implementation of the baselines. We also implement several variations to reveal the effects of features: authors, likers, comment, and commenters. In the results section we compare our model with related work. Dataset We tested the proposed UTCNN on two different datasets: FBFans and CreateDebate. FBFans is a privately-owned, single-topic, Chinese, unbalanced, social media dataset, and CreateDebate is a public, multiple-topic, English, balanced, forum dataset. Results using these two datasets show the applicability and superiority for different topics, languages, data distributions, and platforms. The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). 
Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen’s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero. To test whether the assumption of this paper – posts attract users who hold the same stance to like them – is reliable, we examine the likes from authors of different stances. Posts in FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts. As the numbers of authors in the Sup, Neu and Uns stances are largely imbalanced, these numbers are normalized by the number of users of each stance. Table TABREF13 shows the results. Posts with stances (i.e., not neutral) attract users of the same stance. Neutral posts also attract both supportive and neutral users, like what we observe in supportive posts, but just the neutral posts can attract even more neutral likers. These results do suggest that users prefer posts of the same stance, or at least posts of no obvious stance which might cause annoyance when reading, and hence support the user modeling in our approach. The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12 . We observe that the dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and present the annotation results as the average number of all folds BIBREF9 , BIBREF5 . The FBFans dataset has more integrated functions than the CreateDebate dataset; thus our model can utilize all linguistic and extra-linguistic features. For the CreateDebate dataset, on the other hand, the like and comment features are not available (as there is a stance label for each reply, replies are evaluated as posts as other previous work) but we still implemented our model using the content, author, and topic information. Settings In the UTCNN training process, cross-entropy was used as the loss function and AdaGrad as the optimizer. 
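The inter-annotator figures quoted above (raw agreement of 0.91, Cohen's kappa of 0.58 and 0.84) follow the standard definitions; the generic helper below shows how such numbers are computed and is not tied to the FBFans annotation files.

```python
from collections import Counter

def raw_agreement(labels_a, labels_b):
    """Fraction of items on which the two annotators assign the same label."""
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    """Agreement corrected for the chance level implied by each annotator's label marginals."""
    n = len(labels_a)
    p_o = raw_agreement(labels_a, labels_b)
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)
```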
For the FBFans dataset, we learned the 50-dimensional word embeddings on the whole dataset using GloVe BIBREF21 to capture the word semantics; for the CreateDebate dataset we used the publicly available English 50-dimensional word embeddings, pre-trained also using GloVe. These word embeddings were fixed in the training process. The learning rate was set to 0.03. All user and topic embeddings were randomly initialized in the range of [-0.1, 0.1]. Matrix embeddings for users and topics were sized at 250 ( INLINEFORM0 ); vector embeddings for users and topics were set to length 10. We applied the LDA topic model BIBREF22 on the FBFans dataset to determine the latent topics with which to build topic embeddings, as there is only one general known topic: nuclear power plants. We learned 100 latent topics and assigned the top three topics for each post. For the CreateDebate dataset, which itself comprises four topics, the topic labels for posts were used directly without additionally applying LDA. For the FBFans data we report class-based f-scores as well as the macro-average f-score ( INLINEFORM0 ) shown in equation EQREF19 . DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are the average precision and recall of the three classes. We adopted the macro-average f-score as the evaluation metric for the overall performance because (1) the experimental dataset is severely imbalanced, which is common for contentious issues; and (2) for stance classification, content in minor-class posts is usually more important for further applications. For the CreateDebate dataset, accuracy was adopted as the evaluation metric to compare the results with related work BIBREF7 , BIBREF9 , BIBREF12 . Baselines We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models for text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; 6) UTCNN without user information, representing a pure-text CNN model where we use the same user matrix and user embeddings INLINEFORM1 and INLINEFORM2 for each user; 7) UTCNN without the LDA model, representing how UTCNN works with a single-topic dataset; 8) UTCNN without comments, in which the model predicts the stance label given only user and topic information. All these models were trained on the training set, and parameters as well as the SVM kernel selections (linear or RBF) were fine-tuned on the development set. Also, we adopt oversampling on SVMs, CNN and RCNN because the FBFans dataset is highly imbalanced. Results on FBFans Dataset In Table TABREF22 we show the results of UTCNN and the baselines on the FBFans dataset. Here Majority yields good performance on Neu since FBFans is highly biased to the neutral class. The SVM models perform well on Sup and Neu but perform poorly for Uns, showing that content information in itself is insufficient to predict stance labels, especially for the minor class.
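For concreteness, baseline 2) above (SVM with average word embeddings) can be sketched as follows; the scikit-learn classifier and the toy variable names are illustrative choices, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

def average_embedding(doc_tokens, embeddings, dim=50):
    """Represent a document as the mean of its words' (e.g. GloVe) vectors."""
    vecs = [embeddings[w] for w in doc_tokens if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# Hypothetical usage: `train_docs` is a list of token lists, `train_labels` their
# stance labels, and `glove` a dict mapping words to 50-dimensional vectors.
# X_train = np.stack([average_embedding(doc, glove) for doc in train_docs])
# clf = LinearSVC().fit(X_train, train_labels)
# predictions = clf.predict(X_train)
```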
With the transformed word embedding features, SVM achieves performance comparable to SVM with n-gram features. However, the much lower feature dimensionality of the transformed word embeddings makes SVM with word embeddings a more efficient choice for modeling large-scale social media datasets. The CNN and RCNN models perform slightly better than most of the SVM models, but the content information is still insufficient to achieve good performance on the Uns posts. As for adding comment information to these models, since the commenters do not always hold the same stance as the author, simply adding comments and post contents together merely adds noise to the model. Among all UTCNN variations, we find that user information is most important, followed by topic and comment information. UTCNN without user information shows results similar to SVMs: it does well for Sup and Neu but detects no Uns. Its best f-scores on both Sup and Neu among all methods show that with enough training data, content-based models can perform well; at the same time, the lack of user information results in too few clues for minor-class posts to either predict their stance directly or link them to other users and posts for improved performance. The 17.5% improvement when adding user information suggests that user information is especially useful when the dataset is highly imbalanced. All models that consider user information predict the minority class successfully. UTCNN without topic information works well but achieves lower performance than the full UTCNN model. The 4.9% performance gain brought by LDA shows that, even for a single-topic dataset, adding latent topics still benefits performance: even when we are discussing the same topic, we use different arguments and supporting evidence. Lastly, we get a 4.8% improvement when adding comment information, reaching performance comparable to UTCNN without topic information, which shows that comments also benefit performance. For platforms where user IDs are pixelated or otherwise hidden, adding comments to a text model still improves performance. In its integration of user, content, and comment information, the full UTCNN produces the highest f-scores on all Sup, Neu, and Uns stances among models that predict the Uns class, and the highest macro-average f-score overall. This shows its ability to balance a biased dataset and supports our claim that UTCNN successfully bridges content and user, topic, and comment information for stance classification on social media text. Another merit of UTCNN is that it does not require balanced training data. This is supported by the fact that it outperforms the other models even though no oversampling is applied in the UTCNN experiments reported in this paper. Thus we conclude that user information provides strong clues and remains rich even in the minority class. We also investigate the semantic difference when a user acts as an author/liker or as a commenter. We evaluated a variation in which all embeddings from the same user were forced to be identical (this is the UTCNN shared user embedding setting in Table TABREF22 ). This setting yielded only a 2.5% improvement over the model without comments, which is not statistically significant. However, when separating author/liker and commenter embeddings (i.e., the UTCNN full model), we achieved much greater improvements (4.8%).
We attribute this result to the tendency of users to use different wording for different roles (for instance, author vs. commenter). This is observed when the user, acting as an author, attempts to support her argument against nuclear power by using improvements in solar power; when acting as a commenter, though, she interacts with post contents by criticizing past politicians who supported nuclear power or by arguing that the proposed evacuation plan in case of a nuclear accident is ridiculous. Based on this finding, in the final UTCNN setting we train two user matrix embeddings for one user: one for the author/liker role and the other for the commenter role. Results on CreateDebate Dataset Table TABREF24 shows the results on the CreateDebate dataset for UTCNN, the baselines we implemented for the FBFans dataset, and related work. We do not adopt oversampling on these models because the CreateDebate dataset is almost balanced. In previous work, integer linear programming (ILP) or linear-chain conditional random fields (CRFs) were proposed to integrate text features, author, ideology, and user-interaction constraints, where text features are unigram, bigram, and POS-dependencies; the author constraint tends to require that posts written by the same author for the same topic hold the same stance; the ideology constraint aims to capture inferences between topics for the same author; the user-interaction constraint models relationships among posts via user interactions such as replies BIBREF7 , BIBREF9 . The SVM with n-gram or average word embedding features performs only about as well as the majority baseline. However, with the transformed word embeddings, it achieves superior results. This shows that the learned user and topic embeddings indeed capture user and topic semantics. This finding is not so obvious in the FBFans dataset, which might be due to the unfavorable data skewness for SVM. As for CNN and RCNN, they perform slightly better than most SVMs, as we found in Table TABREF22 for FBFans. Compared to the ILP BIBREF7 and CRF BIBREF9 methods, the UTCNN user embeddings encode author and user-interaction constraints, where the ideology constraint is modeled by the topic embeddings and text features are modeled by the CNN. The significant improvement achieved by UTCNN suggests the latent representations are more effective than overt model constraints. The PSL model BIBREF12 jointly labels both author and post stance using probabilistic soft logic (PSL) BIBREF23 by considering text features and reply links between authors and posts as in Hasan and Ng's work. Table TABREF24 reports the result of their best AD setting, which represents the full joint stance/disagreement collective model on posts and is hence more relevant to UTCNN. In contrast to their model, the UTCNN user embeddings represent relationships between authors, but UTCNN does not utilize link information between posts. Though the PSL model has the advantage of being able to jointly label the stances of authors and posts, its performance on posts is lower than that of the ILP or CRF models. UTCNN significantly outperforms these models on posts and has the potential to predict user stances through the generated user embeddings. For the CreateDebate dataset, we also evaluated performance when not using topic embeddings or user embeddings; as replies in this dataset are viewed as posts, the setting without comment embeddings is not available.
Table TABREF24 shows the same findings as Table TABREF22 : the 21% improvement in accuracy demonstrates that user information is the most vital. This finding also supports the results in the related work: user constraints are useful and can yield 11.2% improvement in accuracy BIBREF7 . Further considering topic information yields 3.4% improvement, suggesting that knowing the subject of debates provides useful information. In sum, Table TABREF22 together with Table TABREF24 show that UTCNN achieves promising performance regardless of topic, language, data distribution, and platform. Conclusion We have proposed UTCNN, a neural network model that incorporates user, topic, content and comment information for stance classification on social media texts. UTCNN learns user embeddings for all users with minimum active degree, i.e., one post or one like. Topic information obtained from the topic model or the pre-defined labels further improves the UTCNN model. In addition, comment information provides additional clues for stance classification. We have shown that UTCNN achieves promising and balanced results. In the future we plan to explore the effectiveness of the UTCNN user embeddings for author stance classification. Acknowledgements Research of this paper was partially supported by Ministry of Science and Technology, Taiwan, under the contract MOST 104-2221-E-001-024-MY2. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What are the baselines? Only give me the answer and do not output any other words.
Answer:
[ "SVM with unigram, bigram, and trigram features, SVM with average word embedding, SVM with average transformed word embeddings, CNN, ecurrent Convolutional Neural Networks, SVM and deep learning models with comment information", "SVM with unigram, bigram, trigram features, with average word embedding, with average transformed word embeddings, CNN and RCNN, SVM, CNN, RCNN with comment information" ]
276
5,789
single_doc_qa
601840b145cc4a92828de36b130adbd6
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction We possess a wealth of prior knowledge about many natural language processing tasks. For example, in text categorization, we know that words such as NBA, player, and basketball are strong indicators of the sports category BIBREF0 , and words like terrible, boring, and messing indicate a negative polarity while words like perfect, exciting, and moving suggest a positive polarity in sentiment classification. A key problem arising here is how to leverage such knowledge to guide the learning process, an interesting problem for both the NLP and machine learning communities. Previous studies addressing the problem fall into several lines. First, to leverage prior knowledge to label data BIBREF1 , BIBREF2 . Second, to encode prior knowledge with a prior on parameters, which can be commonly seen in many Bayesian approaches BIBREF3 , BIBREF4 . Third, to formalise prior knowledge with additional variables and dependencies BIBREF5 . Last, to use prior knowledge to control the distributions over latent output variables BIBREF6 , BIBREF7 , BIBREF8 , which makes the output variables easily interpretable. However, a crucial problem, which has rarely been addressed, is the bias in the prior knowledge that we supply to the learning model. Would the model be robust or sensitive to the prior knowledge? Or, which kind of knowledge is appropriate for the task? Consider an example: we may be baseball fans but unfamiliar with hockey, so that for a baseball-hockey classification task we can provide a number of feature words for baseball but far fewer for hockey. Such prior knowledge may mislead the model with a heavy bias towards baseball. If the model cannot handle this situation appropriately, the performance may be undesirable. In this paper, we investigate the problem in the framework of Generalized Expectation Criteria BIBREF7 . The study aims to reveal the factors that reduce sensitivity to the prior knowledge and therefore make the model more robust and practical. To this end, we introduce auxiliary regularization terms in which our prior knowledge is formalized as distributions over output variables. Recall the example just mentioned: though we do not have enough knowledge to provide features for the class hockey, it is easy to provide some neutral words, namely words that are not strong indicators of any class, like player here. As one of the factors revealed in this paper, supplying neutral feature words can boost the performance remarkably, making the model more robust. More attractively, we do not need manual annotation to label these neutral feature words in our proposed approach. More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) the maximum entropy of class distribution regularization term; and (3) the KL divergence between reference and predicted class distribution. For the first term, we simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third terms, we assume we have some knowledge about the class distribution, which will be detailed later. 
To summarize, the main contributions of this work are as follows: The rest of the paper is structured as follows: In Section 2, we briefly describe the generalized expectation criteria and present the proposed regularization terms. In Section 3, we conduct extensive experiments to justify the proposed methods. We survey related work in Section 4, and summarize our work in Section 5. Method We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier. For example, words like amazing, exciting can be labeled features for class positive in sentiment classification. Generalized Expectation Criteria Generalized expectation (GE) criteria BIBREF7 provides us a natural way to directly constrain the model in the preferred direction. For example, when we know the proportion of each class of the dataset in a classification task, we can guide the model to predict out a pre-specified class distribution. Formally, in a parameter estimation objective function, a GE term expresses preferences on the value of some constraint functions about the model's expectation. Given a constraint function $G({\rm x}, y)$ , a conditional model distribution $p_\theta (y|\rm x)$ , an empirical distribution $\tilde{p}({\rm x})$ over input samples and a score function $S$ , a GE term can be expressed as follows: $$S(E_{\tilde{p}({\rm x})}[E_{p_\theta (y|{\rm x})}[G({\rm x}, y)]])$$ (Eq. 4) Learning from Labeled Features Druck et al. ge-fl proposed GE-FL to learn from labeled features using generalized expectation criteria. When given a set of labeled features $K$ , the reference distribution over classes of these features is denoted by $\hat{p}(y| x_k), k \in K$ . GE-FL introduces the divergence between this reference distribution and the model predicted distribution $p_\theta (y | x_k)$ , as a term of the objective function: $$\mathcal {O} = \sum _{k \in K} KL(\hat{p}(y|x_k) || p_\theta (y | x_k)) + \sum _{y,i} \frac{\theta _{yi}^2}{2 \sigma ^2}$$ (Eq. 6) where $\theta _{yi}$ is the model parameter which indicates the importance of word $i$ to class $y$ . The predicted distribution $p_\theta (y | x_k)$ can be expressed as follows: $ p_\theta (y | x_k) = \frac{1}{C_k} \sum _{\rm x} p_\theta (y|{\rm x})I(x_k) $ in which $I(x_k)$ is 1 if feature $k$ occurs in instance ${\rm x}$ and 0 otherwise, $C_k = \sum _{\rm x} I(x_k)$ is the number of instances with a non-zero value of feature $k$ , and $p_\theta (y|{\rm x})$ takes a softmax form as follows: $ p_\theta (y|{\rm x}) = \frac{1}{Z(\rm x)}\exp (\sum _i \theta _{yi}x_i). $ To solve the optimization problem, L-BFGS can be used for parameter estimation. In the framework of GE, this term can be obtained by setting the constraint function $G({\rm x}, y) = \frac{1}{C_k} \vec{I} (y)I(x_k)$ , where $\vec{I}(y)$ is an indicator vector with 1 at the index corresponding to label $y$ and 0 elsewhere. Regularization Terms GE-FL reduces the heavy load of instance annotation and performs well when we provide prior knowledge with no bias. In our experiments, we observe that comparable numbers of labeled features for each class have to be supplied. But as mentioned before, it is often the case that we are not able to provide enough knowledge for some of the classes. For the baseball-hockey classification task, as shown before, GE-FL will predict most of the instances as baseball. 
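Before the additional terms are introduced, a minimal sketch of the base GE-FL objective in Eq. 6 may help: it accumulates, for every labeled feature, the KL divergence between the feature's reference distribution and the model's predicted distribution over the documents containing that feature, plus a Gaussian prior on the weights. The dense NumPy representation, shapes, and variable names are our simplifying assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ge_fl_objective(theta, X, labeled_feats, ref_dists, sigma=1.0):
    """theta: (n_classes, n_features) weights; X: (n_docs, n_features) binary indicators;
    labeled_feats: feature indices k; ref_dists: dict k -> reference distribution p_hat(y|x_k)."""
    p_y_given_x = softmax(X @ theta.T)                              # (n_docs, n_classes)
    objective = 0.0
    for k in labeled_feats:
        docs_with_k = X[:, k] > 0
        p_theta_k = p_y_given_x[docs_with_k].mean(axis=0)           # predicted p_theta(y | x_k)
        ref = ref_dists[k]
        objective += float(np.sum(ref * np.log(ref / p_theta_k)))   # KL(p_hat || p_theta)
    objective += float(np.sum(theta ** 2)) / (2 * sigma ** 2)       # Gaussian prior on the weights
    return objective
```

As noted above, this objective is minimized with L-BFGS in GE-FL; the sketch only computes its value.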
In this section, we will show three terms that make the model more robust. Neutral features are features that are not informative indicators of any class, for instance, the word player in the baseball-hockey classification task. Such features are usually frequent words across all categories. When we set the preference distribution of the neutral features to be uniformly distributed, these neutral features will prevent the model from biasing towards the class that has a dominant number of labeled features. Formally, given a set of neutral features $K^{^{\prime }}$ , the uniform distribution is $\hat{p}_u(y|x_k) = \frac{1}{|C|}, k \in K^{^{\prime }}$ , where $|C|$ is the number of classes. The objective function with the new term becomes $$\mathcal {O}_{NE} = \mathcal {O} + \sum _{k \in K^{^{\prime }}} KL(\hat{p}_u(y|x_k) || p_\theta (y | x_k)).$$ (Eq. 9) Note that we do not need manual annotation to provide neutral features. One simple way is to take the most common features as neutral features. Experimental results show that this strategy works successfully. Another way to prevent the model from drifting from the desired direction is to constrain the predicted class distribution on unlabeled data. When lacking knowledge about the class distribution of the data, one feasible way is to apply the maximum entropy principle, as below: $$\mathcal {O}_{ME} = \mathcal {O} + \lambda \sum _{y} p(y) \log p(y)$$ (Eq. 11) where $p(y)$ is the predicted class distribution, given by $ p(y) = \frac{1}{|X|} \sum _{\rm x} p_\theta (y | \rm x). $ To control the influence of this term on the overall objective function, we can tune $\lambda $ according to the difference in the number of labeled features of each class. In this paper, we simply set $\lambda $ to be proportional to the total number of labeled features, say $\lambda = \beta |K|$ . This maximum entropy term can be derived by setting the constraint function to $G({\rm x}, y) = \vec{I}(y)$ . Therefore, $E_{p_\theta (y|{\rm x})}[G({\rm x}, y)]$ is just the model distribution $p_\theta (y|{\rm x})$ and its expectation with the empirical distribution $\tilde{p}(\rm x)$ is simply the average over input samples, namely $p(y)$ . When $S$ takes the maximum entropy form, we can derive the objective function as above. Sometimes we already have much knowledge about the corpus, and can estimate the class distribution roughly without labeling instances. Therefore, we introduce the KL divergence between the predicted and reference class distributions into the objective function. Given the preference class distribution $\hat{p}(y)$ , we modify the objective function as follows: $$\mathcal {O}_{KL} = \mathcal {O} + \lambda KL(\hat{p}(y) || p(y))$$ (Eq. 13) Similarly, we set $\lambda = \beta |K|$ . This divergence term can be derived by setting the constraint function to $G({\rm x}, y) = \vec{I}(y)$ and setting the score function to $S(\hat{p}, p) = \sum _i \hat{p}_i \log \frac{\hat{p}_i}{p_i}$ , where $p$ and $\hat{p}$ are distributions. Note that this regularization term involves the reference class distribution, which will be discussed later. Experiments In this section, we first justify the approach when there exists unbalance in the number of labeled features or in the class distribution. Then, to test the influence of $\lambda $ , we conduct some experiments with the method which incorporates the KL divergence of class distribution. Last, we evaluate our approaches on 9 commonly used text classification datasets. 
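The three auxiliary terms of Eqs. 9, 11 and 13 can be written in the same style; the helper below gathers them in a single function for brevity, whereas the paper studies them separately. The setting $\lambda = \beta |K|$ and the uniform reference for neutral features follow the text, while the variable names are ours.

```python
import numpy as np

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

def auxiliary_terms(p_y_given_x, X, neutral_feats, n_classes, lam, ref_class_dist=None):
    """p_y_given_x: (n_docs, n_classes) model predictions on the (unlabeled) documents."""
    total = 0.0
    uniform = np.full(n_classes, 1.0 / n_classes)
    for k in neutral_feats:                                  # (1) neutral-feature term (Eq. 9)
        p_theta_k = p_y_given_x[X[:, k] > 0].mean(axis=0)
        total += kl(uniform, p_theta_k)
    p_y = p_y_given_x.mean(axis=0)                           # predicted class distribution p(y)
    total += lam * float(np.sum(p_y * np.log(p_y)))          # (2) maximum-entropy term (Eq. 11)
    if ref_class_dist is not None:                           # (3) KL to the reference distribution (Eq. 13)
        total += lam * kl(np.asarray(ref_class_dist), p_y)
    return total
```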
We set $\lambda = 5|K|$ by default in all experiments unless explicitly stated otherwise. The baseline we choose here is GE-FL BIBREF0 , a method based on generalized expectation criteria. Data Preparation We evaluate our methods on several commonly used datasets whose themes range from sentiment, web pages, and science to medicine and healthcare. We use bag-of-words features and remove stopwords in the preprocessing stage. Though we have labels for all documents, we do not use them during the learning process; instead, we use the labels of features. The movie dataset, in which the task is to classify the movie reviews as positive or negative, is used for testing the proposed approaches with unbalanced labeled features, unbalanced datasets or different $\lambda $ parameters. All unbalanced datasets are constructed based on the movie dataset by randomly removing documents of the positive class. For each experiment, we conduct 10-fold cross validation. As described in BIBREF0 , there are two ways to obtain labeled features. The first way is to use information gain. We first calculate the mutual information of all features according to the labels of the documents and select the top 20 as labeled features for each class as a feature pool. Note that using information gain requires the document label, but this is only to simulate how we humans provide prior knowledge to the model. The second way is to use LDA BIBREF9 to select features. We use the same selection process as BIBREF0 , where they first train an LDA model on the dataset, and then select the most probable features of each topic (sorted by $P(w_i|t_j)$ , the probability of word $w_i$ given topic $t_j$ ). Similar to BIBREF10 , BIBREF0 , we estimate the reference distribution of the labeled features using a heuristic strategy. If there are $|C|$ classes in total, and $n$ classes are associated with a feature $k$ , the probability that feature $k$ is related to any one of the $n$ classes is $\frac{0.9}{n}$ and with any other class is $\frac{0.1}{|C| - n}$ . Neutral features are the most frequent words after removing stop words, and their reference distributions are uniformly distributed. We use the top 10 frequent words as neutral features in all experiments. With Unbalanced Labeled Features In this section, we evaluate our approach when there is unbalanced knowledge about the categories to be classified. The labeled features are obtained through information gain. Two settings are chosen: (a) We randomly select $t \in [1, 20]$ features from the feature pool for one class, and only one feature for the other. The original balanced movie dataset is used (positive:negative=1:1). (b) Similar to (a), but the dataset is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4). As shown in Figure 1 , the maximum entropy principle shows improvement only in the balanced case. An obvious reason is that maximum entropy only favors a uniform distribution. Incorporating neutral features performs similarly to maximum entropy since we assume that neutral words are uniformly distributed. Its accuracy decreases slowly when the number of labeled features becomes larger ( $t>4$ ) (Figure 1 (a)), suggesting that the model gradually biases towards the class with more labeled features, just like GE-FL. Incorporating the KL divergence of class distribution performs much better than GE-FL on both balanced and unbalanced datasets. This shows that it effectively controls the unbalance both in the labeled features and in the dataset. 
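As a side note, the heuristic reference distribution described in the data preparation above (probability 0.9/n for each of the n classes associated with a labeled feature and 0.1/(|C| - n) for every other class) is straightforward to implement; the function below is an illustrative sketch assuming n < |C|.

```python
import numpy as np

def reference_distribution(associated_classes, n_classes):
    """associated_classes: indices of the n (< |C|) classes a labeled feature indicates."""
    n = len(associated_classes)
    dist = np.full(n_classes, 0.1 / (n_classes - n))
    dist[list(associated_classes)] = 0.9 / n
    return dist

print(reference_distribution([0], 2))  # a feature tied to one of two classes -> [0.9, 0.1]
```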
With Balanced Labeled Features We also compare with the baseline when the labeled features are balanced. Similar to the experiment above, the labeled features are obtained by information gain. Two settings are experimented with: (a) We randomly select $t \in [1, 20]$ features from the feature pool for each class, and conduct comparisons on the original balanced movie dataset (positive:negtive=1:1). (b) Similar to (a), but the class distribution is unbalanced, by randomly removing 75% positive documents (positive:negative=1:4). Results are shown in Figure 2 . When the dataset is balanced (Figure 2 (a)), there is little difference between GE-FL and our methods. The reason is that the proposed regularization terms provide no additional knowledge to the model and there is no bias in the labeled features. On the unbalanced dataset (Figure 2 (b)), incorporating KL divergence is much better than GE-FL since we provide additional knowledge(the true class distribution), but maximum entropy and neutral features are much worse because forcing the model to approach the uniform distribution misleads it. With Unbalanced Class Distributions Our methods are also evaluated on datasets with different unbalanced class distributions. We manually construct several movie datasets with class distributions of 1:2, 1:3, 1:4 by randomly removing 50%, 67%, 75% positive documents. The original balanced movie dataset is used as a control group. We test with both balanced and unbalanced labeled features. For the balanced case, we randomly select 10 features from the feature pool for each class, and for the unbalanced case, we select 10 features for one class, and 1 feature for the other. Results are shown in Figure 3 . Figure 3 (a) shows that when the dataset and the labeled features are both balanced, there is little difference between our methods and GE-FL(also see Figure 2 (a)). But when the class distribution becomes more unbalanced, the difference becomes more remarkable. Performance of neutral features and maximum entropy decrease significantly but incorporating KL divergence increases remarkably. This suggests if we have more accurate knowledge about class distribution, KL divergence can guide the model to the right direction. Figure 3 (b) shows that when the labeled features are unbalanced, our methods significantly outperforms GE-FL. Incorporating KL divergence is robust enough to control unbalance both in the dataset and in labeled features while the other three methods are not so competitive. The Influence of λ\lambda We present the influence of $\lambda $ on the method that incorporates KL divergence in this section. Since we simply set $\lambda = \beta |K|$ , we just tune $\beta $ here. Note that when $\beta = 0$ , the newly introduced regularization term is disappeared, and thus the model is actually GE-FL. Again, we test the method with different $\lambda $ in two settings: (a) We randomly select $t \in [1, 20]$ features from the feature pool for one class, and only one feature for the other class. The original balanced movie dataset is used (positive:negative=1:1). (b) Similar to (a), but the dataset is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4). Results are shown in Figure 4 . As expected, $\lambda $ reflects how strong the regularization is. The model tends to be closer to our preferences with the increasing of $\lambda $ on both cases. Using LDA Selected Features We compare our methods with GE-FL on all the 9 datasets in this section. 
Instead of using features obtained by information gain, we use LDA to select labeled features. Unlike information gain, LDA does not employ any instance labels to find labeled features. In this setting, we can build classification models without any instance annotation, but just with labeled features. Table 1 shows that our three methods significantly outperform GE-FL. Incorporating neutral features performs better than GE-FL on 7 of the 9 datasets, maximum entropy is better on 8 datasets, and KL divergence better on 7 datasets. LDA selects the most predictive features as labeled features without considering the balance among classes. GE-FL does not exert any control on such an issue, so its performance suffers severely. Our methods introduce auxiliary regularization terms to control such a bias problem and thus improve the model significantly. Related Work There has been much work incorporating prior knowledge into learning, and two related lines are surveyed here. One is to use prior knowledge to label unlabeled instances and then apply a standard learning algorithm. The other is to constrain the model directly with prior knowledge. Liu et al. manually labeled features which are highly predictive of unsupervised clustering assignments and used them to label unlabeled data. Chang et al. proposed constraint-driven learning. They first used constraints and the learned model to annotate unlabeled instances, and then updated the model with the newly labeled data. Daumé proposed a self-training method in which several models are trained on the same dataset, and only unlabeled instances that satisfy the cross-task knowledge constraints are used in the self-training process. McCallum et al. proposed generalized expectation (GE) criteria, which formalise the knowledge as constraint terms about the model's expectations in the objective function. Graça et al. proposed the posterior regularization (PR) framework, which projects the model's posterior onto a set of distributions that satisfy the auxiliary constraints. Druck et al. explored constraints of labeled features in the framework of GE by forcing the model's predicted feature distribution to approach the reference distribution. Andrzejewski et al. proposed a framework in which general domain knowledge can be easily incorporated into LDA. Altendorf et al. explored monotonicity constraints to improve accuracy while learning from sparse data. Chen et al. tried to learn comprehensible topic models by leveraging multi-domain knowledge. Mann and McCallum incorporated not only labeled features but also other knowledge, such as the class distribution, into the objective function of GE-FL. However, they discussed this only from the semi-supervised perspective and did not investigate the robustness problem addressed in this paper. There are also some active learning methods that try to use prior knowledge. Raghavan et al. proposed to use feedback on instances and features in an interleaved manner, and demonstrated that feedback on features boosts the model considerably. Druck et al. proposed an active learning method which solicits labels on features rather than on instances and then used GE-FL to train the model. Conclusion and Discussions This paper investigates the problem of how to leverage prior knowledge robustly in learning models. We propose three regularization terms on top of generalized expectation criteria. 
As demonstrated by the experimental results, the performance can be considerably improved when these factors are taken into account. Comparative results show that our proposed methods are more effective and work more robustly than the baselines. To the best of our knowledge, this is the first work to address the robustness problem of leveraging knowledge, and it may inspire other research. We now discuss the three regularization methods in more detail. Incorporating neutral features is the simplest form of regularization, which requires no modification of GE-FL, only identifying some common features. But as Figure 1 (a) shows, using only neutral features is not strong enough to handle extremely unbalanced labeled features. The maximum entropy regularization term shows a strong ability to control unbalance. This method does not need any extra knowledge, and is thus suitable when we know nothing about the corpus. But this method assumes that the categories are uniformly distributed, which may not be the case in practice, and its performance degrades if the assumption is violated (see Figure 1 (b), Figure 2 (b), Figure 3 (a)). The KL divergence performs much better on unbalanced corpora than the other methods. The reason is that KL divergence utilizes the reference class distribution and does not make the uniformity assumption. This suggests that additional knowledge does benefit the model. However, the KL divergence term requires providing the true class distribution. Sometimes we may have exact knowledge of the true distribution, but sometimes we may not. Fortunately, the model is insensitive to the true distribution and therefore a rough estimate of the true distribution is sufficient. In our experiments, when the true class distribution is 1:2 and the reference class distribution is set to 1:1.5/1:2/1:2.5, the accuracy is 0.755/0.756/0.760 respectively. This makes it feasible to obtain the distribution in practice with simple computation on the corpus. Alternatively, we can set the distribution roughly using domain expertise. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What are the three regularization terms? Only give me the answer and do not output any other words.
Answer:
[ "a regularization term associated with neutral features, the maximum entropy of class distribution regularization term, the KL divergence between reference and predicted class distribution", "a regularization term associated with neutral features, the maximum entropy of class distribution, KL divergence between reference and predicted class distribution" ]
276
4,943
single_doc_qa
76f8f5d1e6794288859ac58e4cd042b9
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction and Background Task-oriented dialogue systems are primarily designed to search and interact with large databases which contain information pertaining to a certain dialogue domain: the main purpose of such systems is to assist the users in accomplishing a well-defined task such as flight booking BIBREF0, tourist information BIBREF1, restaurant search BIBREF2, or booking a taxi BIBREF3. These systems are typically constructed around rigid task-specific ontologies BIBREF1, BIBREF4 which enumerate the constraints the users can express using a collection of slots (e.g., price range for restaurant search) and their slot values (e.g., cheap, expensive for the aforementioned slots). Conversations are then modelled as a sequence of actions that constrain slots to particular values. This explicit semantic space is manually engineered by the system designer. It serves as the output of the natural language understanding component as well as the input to the language generation component both in traditional modular systems BIBREF5, BIBREF6 and in more recent end-to-end task-oriented dialogue systems BIBREF7, BIBREF8, BIBREF9, BIBREF3. Working with such explicit semantics for task-oriented dialogue systems poses several critical challenges on top of the manual time-consuming domain ontology design. First, it is difficult to collect domain-specific data labelled with explicit semantic representations. As a consequence, despite recent data collection efforts to enable training of task-oriented systems across multiple domains BIBREF0, BIBREF3, annotated datasets are still few and far between, as well as limited in size and the number of domains covered. Second, the current approach constrains the types of dialogue the system can support, resulting in artificial conversations, and breakdowns when the user does not understand what the system can and cannot support. In other words, training a task-based dialogue system for voice-controlled search in a new domain always implies the complex, expensive, and time-consuming process of collecting and annotating sufficient amounts of in-domain dialogue data. In this paper, we present a demo system based on an alternative approach to task-oriented dialogue. Relying on non-generative response retrieval we describe the PolyResponse conversational search engine and its application in the task of restaurant search and booking. The engine is trained on hundreds of millions of real conversations from a general domain (i.e., Reddit), using an implicit representation of semantics that directly optimizes the task at hand. It learns what responses are appropriate in different conversational contexts, and consequently ranks a large pool of responses according to their relevance to the current user utterance and previous dialogue history (i.e., dialogue context). The technical aspects of the underlying conversational search engine are explained in detail in our recent work BIBREF11, while the details concerning the Reddit training data are also available in another recent publication BIBREF12. In this demo, we put focus on the actual practical usefulness of the search engine by demonstrating its potential in the task of restaurant search, and extending it to deal with multi-modal data. 
We describe a PolyReponse system that assists the users in finding a relevant restaurant according to their preference, and then additionally helps them to make a booking in the selected restaurant. Due to its retrieval-based design, with the PolyResponse engine there is no need to engineer a structured ontology, or to solve the difficult task of general language generation. This design also bypasses the construction of dedicated decision-making policy modules. The large ranking model already encapsulates a lot of knowledge about natural language and conversational flow. Since retrieved system responses are presented visually to the user, the PolyResponse restaurant search engine is able to combine text responses with relevant visual information (e.g., photos from social media associated with the current restaurant and related to the user utterance), effectively yielding a multi-modal response. This setup of using voice as input, and responding visually is becoming more and more prevalent with the rise of smart screens like Echo Show and even mixed reality. Finally, the PolyResponse restaurant search engine is multilingual: it is currently deployed in 8 languages enabling search over restaurants in 8 cities around the world. System snapshots in four different languages are presented in Figure FIGREF16, while screencast videos that illustrate the dialogue flow with the PolyResponse engine are available at: https://tinyurl.com/y3evkcfz. PolyResponse: Conversational Search The PolyResponse system is powered by a single large conversational search engine, trained on a large amount of conversational and image data, as shown in Figure FIGREF2. In simple words, it is a ranking model that learns to score conversational replies and images in a given conversational context. The highest-scoring responses are then retrieved as system outputs. The system computes two sets of similarity scores: 1) $S(r,c)$ is the score of a candidate reply $r$ given a conversational context $c$, and 2) $S(p,c)$ is the score of a candidate photo $p$ given a conversational context $c$. These scores are computed as a scaled cosine similarity of a vector that represents the context ($h_c$), and a vector that represents the candidate response: a text reply ($h_r$) or a photo ($h_p$). For instance, $S(r,c)$ is computed as $S(r,c)=C cos(h_r,h_c)$, where $C$ is a learned constant. The part of the model dealing with text input (i.e., obtaining the encodings $h_c$ and $h_r$) follows the architecture introduced recently by Henderson:2019acl. We provide only a brief recap here; see the original paper for further details. PolyResponse: Conversational Search ::: Text Representation. The model, implemented as a deep neural network, learns to respond by training on hundreds of millions context-reply $(c,r)$ pairs. First, similar to Henderson:2017arxiv, raw text from both $c$ and $r$ is converted to unigrams and bigrams. All input text is first lower-cased and tokenised, numbers with 5 or more digits get their digits replaced by a wildcard symbol #, while words longer than 16 characters are replaced by a wildcard token LONGWORD. Sentence boundary tokens are added to each sentence. The vocabulary consists of the unigrams that occur at least 10 times in a random 10M subset of the Reddit training set (see Figure FIGREF2) plus the 200K most frequent bigrams in the same random subset. 
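A minimal sketch of this text preprocessing, assuming a naive whitespace tokeniser and invented boundary-token names (the paper does not give the exact tokeniser or token strings):

```python
# Sketch of the described preprocessing: lower-casing, tokenisation,
# '#' wildcards for digits of numbers with 5+ digits, LONGWORD for words
# longer than 16 characters, and sentence boundary tokens.
import re

BOS, EOS = "<s>", "</s>"   # boundary token names are assumptions

def preprocess(sentence: str):
    tokens = []
    for tok in sentence.lower().split():        # naive whitespace tokenisation
        if sum(ch.isdigit() for ch in tok) >= 5:
            tok = re.sub(r"\d", "#", tok)       # mask digits of long numbers
        elif len(tok) > 16:
            tok = "LONGWORD"                    # mask very long words
        tokens.append(tok)
    return [BOS] + tokens + [EOS]               # add sentence boundary tokens

def unigrams_and_bigrams(tokens):
    """Features feeding the vocabulary: unigrams plus adjacent bigrams."""
    bigrams = [f"{a} {b}" for a, b in zip(tokens, tokens[1:])]
    return tokens, bigrams
```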
During training, we obtain $d$-dimensional feature representations ($d=320$) shared between contexts and replies for each unigram and bigram jointly with other neural net parameters. A state-of-the-art architecture based on transformers BIBREF13 is then applied to unigram and bigram vectors separately, which are then averaged to form the final 320-dimensional encoding. That encoding is then passed through three fully-connected non-linear hidden layers of dimensionality $1,024$. The final layer is linear and maps the text into the final $l$-dimensional ($l=512$) representation: $h_c$ and $h_r$. Other standard and more sophisticated encoder models can also be used to provide final encodings $h_c$ and $h_r$, but the current architecture shows a good trade-off between speed and efficacy with strong and robust performance in our empirical evaluations on the response retrieval task using Reddit BIBREF14, OpenSubtitles BIBREF15, and AmazonQA BIBREF16 conversational test data, see BIBREF12 for further details. In training the constant $C$ is constrained to lie between 0 and $\sqrt{l}$. Following Henderson:2017arxiv, the scoring function in the training objective aims to maximise the similarity score of context-reply pairs that go together, while minimising the score of random pairings: negative examples. Training proceeds via SGD with batches comprising 500 pairs (1 positive and 499 negatives). PolyResponse: Conversational Search ::: Photo Representation. Photos are represented using convolutional neural net (CNN) models pretrained on ImageNet BIBREF17. We use a MobileNet model with a depth multiplier of 1.4, and an input dimension of $224 \times 224$ pixels as in BIBREF18. This provides a $1,280 \times 1.4 = 1,792$-dimensional representation of a photo, which is then passed through a single hidden layer of dimensionality $1,024$ with ReLU activation, before being passed to a hidden layer of dimensionality 512 with no activation to provide the final representation $h_p$. PolyResponse: Conversational Search ::: Data Source 1: Reddit. For training text representations we use a Reddit dataset similar to AlRfou:2016arxiv. Our dataset is large and provides natural conversational structure: all Reddit data from January 2015 to December 2018, available as a public BigQuery dataset, span almost 3.7B comments BIBREF12. We preprocess the dataset to remove uninformative and long comments by retaining only sentences containing more than 8 and less than 128 word tokens. After pairing all comments/contexts $c$ with their replies $r$, we obtain more than 727M context-reply $(c,r)$ pairs for training, see Figure FIGREF2. PolyResponse: Conversational Search ::: Data Source 2: Yelp. Once the text encoding sub-networks are trained, a photo encoder is learned on top of a pretrained MobileNet CNN, using data taken from the Yelp Open dataset: it contains around 200K photos and their captions. Training of the multi-modal sub-network then maximises the similarity of captions encoded with the response encoder $h_r$ to the photo representation $h_p$. As a result, we can compute the score of a photo given a context using the cosine similarity of the respective vectors. A photo will be scored highly if it looks like its caption would be a good response to the current context. PolyResponse: Conversational Search ::: Index of Responses. The Yelp dataset is used at inference time to provide text and photo candidates to display to the user at each step in the conversation. 
Our restaurant search is currently deployed separately for each city, and we limit the responses to a given city. For instance, for our English system for Edinburgh we work with 396 restaurants, 4,225 photos (these include additional photos obtained using the Google Places API without captions), 6,725 responses created from the structured information about restaurants that Yelp provides, converted using simple templates to sentences of the form such as “Restaurant X accepts credit cards.”, 125,830 sentences extracted from online reviews. PolyResponse: Conversational Search ::: PolyResponse in a Nutshell. The system jointly trains two encoding functions (with shared word embeddings) $f(context)$ and $g(reply)$ which produce encodings $h_c$ and $h_r$, so that the similarity $S(c,r)$ is high for all $(c,r)$ pairs from the Reddit training data and low for random pairs. The encoding function $g()$ is then frozen, and an encoding function $t(photo)$ is learnt which makes the similarity between a photo and its associated caption high for all (photo, caption) pairs from the Yelp dataset, and low for random pairs. $t$ is a CNN pretrained on ImageNet, with a shallow one-layer DNN on top. Given a new context/query, we then provide its encoding $h_c$ by applying $f()$, and find plausible text replies and photo responses according to functions $g()$ and $t()$, respectively. These should be responses that look like answers to the query, and photos that look like they would have captions that would be answers to the provided query. At inference, finding relevant candidates given a context reduces to computing $h_c$ for the context $c$ , and finding nearby $h_r$ and $h_p$ vectors. The response vectors can all be pre-computed, and the nearest neighbour search can be further optimised using standard libraries such as Faiss BIBREF19 or approximate nearest neighbour retrieval BIBREF20, giving an efficient search that scales to billions of candidate responses. The system offers both voice and text input and output. Speech-to-text and text-to-speech conversion in the PolyResponse system is currently supported by the off-the-shelf Google Cloud tools. Dialogue Flow The ranking model lends itself to the one-shot task of finding the most relevant responses in a given context. However, a restaurant-browsing system needs to support a dialogue flow where the user finds a restaurant, and then asks questions about it. The dialogue state for each search scenario is represented as the set of restaurants that are considered relevant. This starts off as all the restaurants in the given city, and is assumed to monotonically decrease in size as the conversation progresses until the user converges to a single restaurant. A restaurant is only considered valid in the context of a new user input if it has relevant responses corresponding to it. This flow is summarised here: S1. Initialise $R$ as the set of all restaurants in the city. Given the user's input, rank all the responses in the response pool pertaining to restaurants in $R$. S2. Retrieve the top $N$ responses $r_1, r_2, \ldots , r_N$ with corresponding (sorted) cosine similarity scores: $s_1 \ge s_2 \ge \ldots \ge s_N$. S3. Compute probability scores $p_i \propto \exp (a \cdot s_i)$ with $\sum _{i=1}^N p_i$, where $a>0$ is a tunable constant. S4. Compute a score $q_e$ for each restaurant/entity $e \in R$, $q_e = \sum _{i: r_i \in e} p_i$. S5. Update $R$ to the smallest set of restaurants with highest $q$ whose $q$-values sum up to more than a predefined threshold $t$. 
S6. Display the most relevant responses associated with the updated $R$, and return to S2. If there are multiple relevant restaurants, one response is shown from each. When only one restaurant is relevant, the top $N$ responses are all shown, and relevant photos are also displayed. The system does not require dedicated understanding, decision-making, and generation modules, and this dialogue flow does not rely on explicit task-tailored semantics. The set of relevant restaurants is kept internally while the system narrows it down across multiple dialogue turns. A simple set of predefined rules is used to provide a templatic spoken system response: e.g., an example rule is “One review of $e$ said $r$”, where $e$ refers to the restaurant, and $r$ to a relevant response associated with $e$. Note that while the demo is currently focused on the restaurant search task, the described “narrowing down” dialogue flow is generic and applicable to a variety of applications dealing with similar entity search. The system can use a set of intent classifiers to allow resetting the dialogue state, or to activate the separate restaurant booking dialogue flow. These classifiers are briefly discussed in §SECREF4. Other Functionality ::: Multilinguality. The PolyResponse restaurant search is currently available in 8 languages and for 8 cities around the world: English (Edinburgh), German (Berlin), Spanish (Madrid), Mandarin (Taipei), Polish (Warsaw), Russian (Moscow), Korean (Seoul), and Serbian (Belgrade). Selected snapshots are shown in Figure FIGREF16, while we also provide videos demonstrating the use and behaviour of the systems at: https://tinyurl.com/y3evkcfz. A simple MT-based translate-to-source approach at inference time is currently used to enable the deployment of the system in other languages: 1) the pool of responses in each language is translated to English by Google Translate beforehand, and pre-computed encodings of their English translations are used as representations of each foreign language response; 2) a provided user utterance (i.e., context) is translated to English on-the-fly and its encoding $h_c$ is then learned. We plan to experiment with more sophisticated multilingual models in future work. Other Functionality ::: Voice-Controlled Menu Search. An additional functionality enables the user to get parts of the restaurant menu relevant to the current user utterance as responses. This is achieved by performing an additional ranking step of available menu items and retrieving the ones that are semantically relevant to the user utterance using exactly the same methodology as with ranking other responses. An example of this functionality is shown in Figure FIGREF21. Other Functionality ::: Resetting and Switching to Booking. The restaurant search system needs to support the discrete actions of restarting the conversation (i.e., resetting the set $R$), and should enable transferring to the slot-based table booking flow. This is achieved using two binary intent classifiers, that are run at each step in the dialogue. These classifiers make use of the already-computed $h_c$ vector that represents the user's latest text. A single-layer neural net is learned on top of the 512-dimensional encoding, with a ReLU activation and 100 hidden nodes. To train the classifiers, sets of 20 relevant paraphrases (e.g., “Start again”) are provided as positive examples. 
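The S1–S6 narrowing-down flow above can be rendered schematically as follows; the input layout and the constants a, t and N are placeholders rather than the deployed system's values:

```python
# Schematic sketch of one narrowing-down step (S2-S5), under the assumption
# that retrieval has already been restricted to restaurants in R (S1).
import math

def narrow_down(R, scored_responses, a=1.0, t=0.9, N=100):
    """R: current set of relevant restaurant ids.
    scored_responses: (restaurant_id, cosine_score) pairs for the responses
    retrieved for the latest user input."""
    top = sorted((x for x in scored_responses if x[0] in R),
                 key=lambda x: x[1], reverse=True)[:N]
    # S3: p_i proportional to exp(a * s_i), normalised to sum to 1
    weights = [math.exp(a * s) for _, s in top]
    total = sum(weights) or 1.0
    probs = [w / total for w in weights]
    # S4: per-restaurant score q_e = sum of p_i over responses of e
    q = {e: 0.0 for e in R}
    for (restaurant, _), p in zip(top, probs):
        q[restaurant] += p
    # S5: smallest set of highest-q restaurants whose q-values sum above t
    new_R, mass = set(), 0.0
    for restaurant, score in sorted(q.items(), key=lambda x: x[1], reverse=True):
        new_R.add(restaurant)
        mass += score
        if mass > t:
            break
    return new_R
```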
Finally, when the system successfully switches to the booking scenario, it proceeds to the slot filling task: it aims to extract all the relevant booking information from the user (e.g., date, time, number of people to dine). The entire flow of the system illustrating both the search phase and the booking phase is provided as the supplemental video material. Conclusion and Future Work This paper has presented a general approach to search-based dialogue that does not rely on explicit semantic representations such as dialogue acts or slot-value ontologies, and allows for multi-modal responses. In future work, we will extend the current demo system to more tasks and languages, and work with more sophisticated encoders and ranking functions. Besides the initial dialogue flow from this work (§SECREF3), we will also work with more complex flows dealing, e.g., with user intent shifts. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Was PolyReponse evaluated against some baseline? Only give me the answer and do not output any other words.
Answer:
[ "No", "No" ]
276
3,748
single_doc_qa
bbb4d272e985457f9f3d885013717bf1
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction End-to-end speech-to-text translation (ST) has attracted much attention recently BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 given its simplicity against cascading automatic speech recognition (ASR) and machine translation (MT) systems. The lack of labeled data, however, has become a major blocker for bridging the performance gaps between end-to-end models and cascading systems. Several corpora have been developed in recent years. post2013improved introduced a 38-hour Spanish-English ST corpus by augmenting the transcripts of the Fisher and Callhome corpora with English translations. di-gangi-etal-2019-must created the largest ST corpus to date from TED talks but the language pairs involved are out of English only. beilharz2019librivoxdeen created a 110-hour German-English ST corpus from LibriVox audiobooks. godard-etal-2018-low created a Moboshi-French ST corpus as part of a rare language documentation effort. woldeyohannis provided an Amharic-English ST corpus in the tourism domain. boito2019mass created a multilingual ST corpus involving 8 languages from a multilingual speech corpus based on Bible readings BIBREF7. Previous work either involves language pairs out of English, very specific domains, very low resource languages or a limited set of language pairs. This limits the scope of study, including the latest explorations on end-to-end multilingual ST BIBREF8, BIBREF9. Our work is mostly similar and concurrent to iranzosnchez2019europarlst who created a multilingual ST corpus from the European Parliament proceedings. The corpus we introduce has larger speech durations and more translation tokens. It is diversified with multiple speakers per transcript/translation. Finally, we provide additional out-of-domain test sets. In this paper, we introduce CoVoST, a multilingual ST corpus based on Common Voice BIBREF10 for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. It includes a total 708 hours of French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh) speeches, with French and German ones having the largest durations among existing public corpora. We also collect an additional evaluation corpus from Tatoeba for French, German, Dutch, Russian and Spanish, resulting in a total of 9.3 hours of speech. Both corpora are created at the sentence level and do not require additional alignments or segmentation. Using the official Common Voice train-development-test split, we also provide baseline models, including, to our knowledge, the first end-to-end many-to-one multilingual ST models. CoVoST is released under CC0 license and free to use. The Tatoeba evaluation samples are also available under friendly CC licenses. All the data can be acquired at https://github.com/facebookresearch/covost. Data Collection and Processing ::: Common Voice (CoVo) Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences. Each voice clip was validated by at least two other users. Most of the sentences are covered by multiple speakers, with potentially different genders, age groups or accents. 
Raw CoVo data contains samples that passed validation as well as those that did not. To build CoVoST, we only use the former one and reuse the official train-development-test partition of the validated data. As of January 2020, the latest CoVo 2019-06-12 release includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese. Validated transcripts were sent to professional translators. Note that the translators had access to the transcripts but not the corresponding voice clips since clips would not carry additional information. Since transcripts were duplicated due to multiple speakers, we deduplicated the transcripts before sending them to translators. As a result, different voice clips of the same content (transcript) will have identical translations in CoVoST for train, development and test splits. In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11. 1) For German-English, French-English and Russian-English translations, we computed sentence-level BLEU BIBREF12 with the NLTK BIBREF13 implementation between the human translations and the automatic translations produced by a state-of-the-art system BIBREF14 (the French-English system was a Transformer big BIBREF15 separately trained on WMT14). We applied this method to these three language pairs only as we are confident about the quality of the corresponding systems. Translations with a score that was too low were manually inspected and sent back to the translators when needed. 2) We manually inspected examples where the source transcript was identical to the translation. 3) We measured the perplexity of the translations using a language model trained on a large amount of clean monolingual data BIBREF14. We manually inspected examples where the translation had a high perplexity and sent them back to translators accordingly. 4) We computed the ratio of English characters in the translations. We manually inspected examples with a low ratio and sent them back to translators accordingly. 5) Finally, we used VizSeq BIBREF16 to calculate similarity scores between transcripts and translations based on LASER cross-lingual sentence embeddings BIBREF17. Samples with low scores were manually inspected and sent back for translation when needed. We also sanity check the overlaps of train, development and test sets in terms of transcripts and voice clips (via MD5 file hashing), and confirm they are totally disjoint. Data Collection and Processing ::: Tatoeba (TT) Tatoeba (TT) is a community built language learning corpus having sentences aligned across multiple languages with the corresponding speech partially available. Its sentences are on average shorter than those in CoVoST (see also Table TABREF2) given the original purpose of language learning. Sentences in TT are licensed under CC BY 2.0 FR and part of the speeches are available under various CC licenses. We construct an evaluation set from TT (for French, German, Dutch, Russian and Spanish) as a complement to CoVoST development and test sets. We collect (speech, transcript, English translation) triplets for the 5 languages and do not include those whose speech has a broken URL or is not CC licensed. We further filter these samples by sentence lengths (minimum 4 words including punctuations) to reduce the portion of short sentences. 
This makes the resulting evaluation set closer to real-world scenarios and more challenging. We run the same quality checks for TT as for CoVoST but we do not find poor quality translations according to our criteria. Finally, we report the overlap between CoVo transcripts and TT sentences in Table TABREF5. We found a minimal overlap, which makes the TT evaluation set a suitable additional test set when training on CoVoST. Data Analysis ::: Basic Statistics Basic statistics for CoVoST and TT are listed in Table TABREF2 including (unique) sentence counts, speech durations, speaker demographics (partially available) as well as vocabulary and token statistics (based on Moses-tokenized sentences by sacreMoses) on both transcripts and translations. We see that CoVoST has over 327 hours of German speeches and over 171 hours of French speeches, which, to our knowledge, corresponds to the largest corpus among existing public ST corpora (the second largest is 110 hours BIBREF18 for German and 38 hours BIBREF19 for French). Moreover, CoVoST has a total of 18 hours of Dutch speeches, to our knowledge, contributing the first public Dutch ST resource. CoVoST also has around 27-hour Russian speeches, 37-hour Italian speeches and 67-hour Persian speeches, which is 1.8 times, 2.5 times and 13.3 times of the previous largest public one BIBREF7. Most of the sentences (transcripts) in CoVoST are covered by multiple speakers with potentially different accents, resulting in a rich diversity in the speeches. For example, there are over 1,000 speakers and over 10 accents in the French and German development / test sets. This enables good coverage of speech variations in both model training and evaluation. Data Analysis ::: Speaker Diversity As we can see from Table TABREF2, CoVoST is diversified with a rich set of speakers and accents. We further inspect the speaker demographics in terms of sample distributions with respect to speaker counts, accent counts and age groups, which is shown in Figure FIGREF6, FIGREF7 and FIGREF8. We observe that for 8 of the 11 languages, at least 60% of the sentences (transcripts) are covered by multiple speakers. Over 80% of the French sentences have at least 3 speakers. And for German sentences, even over 90% of them have at least 5 speakers. Similarly, we see that a large portion of sentences are spoken in multiple accents for French, German, Dutch and Spanish. Speakers of each language also spread widely across different age groups (below 20, 20s, 30s, 40s, 50s, 60s and 70s). Baseline Results We provide baselines using the official train-development-test split on the following tasks: automatic speech recognition (ASR), machine translation (MT) and speech translation (ST). Baseline Results ::: Experimental Settings ::: Data Preprocessing We convert raw MP3 audio files from CoVo and TT into mono-channel waveforms, and downsample them to 16,000 Hz. For transcripts and translations, we normalize the punctuation, we tokenize the text with sacreMoses and lowercase it. For transcripts, we further remove all punctuation markers except for apostrophes. We use character vocabularies on all the tasks, with 100% coverage of all the characters. Preliminary experimentation showed that character vocabularies provided more stable training than BPE. For MT, the vocabulary is created jointly on both transcripts and translations. We extract 80-channel log-mel filterbank features, computed with a 25ms window size and 10ms window shift using torchaudio. 
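A hedged sketch of this feature extraction using torchaudio's Kaldi-compatible filterbank routine; whether the 0-mean/unit-variance normalization mentioned next is applied per utterance or over the corpus is not stated, so per-utterance normalization is shown as an assumption:

```python
# Sketch only: 80-channel log-mel filterbank with 25 ms windows and
# 10 ms shift, extracted with torchaudio, then normalized per utterance.
import torchaudio

def extract_fbank(path_to_wav: str):
    waveform, sample_rate = torchaudio.load(path_to_wav)  # mono, 16 kHz input
    feats = torchaudio.compliance.kaldi.fbank(
        waveform,
        num_mel_bins=80,        # 80-channel log-mel filterbank
        frame_length=25.0,      # 25 ms window
        frame_shift=10.0,       # 10 ms shift
        sample_frequency=sample_rate,
    )
    # per-utterance normalization to zero mean / unit variance (assumption)
    feats = (feats - feats.mean(dim=0)) / (feats.std(dim=0) + 1e-8)
    return feats  # shape: (num_frames, 80)
```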
The features are normalized to 0 mean and 1.0 standard deviation. We remove samples having more than 3,000 frames or more than 256 characters for GPU memory efficiency (less than 25 samples are removed for all languages). Baseline Results ::: Experimental Settings ::: Model Training Our ASR and ST models follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing. For MT, we use a Transformer base architecture BIBREF15, but with 3 encoder layers, 3 decoder layers and 0.3 dropout. We use a batch size of 10,000 frames for ASR and ST, and a batch size of 4,000 tokens for MT. We train all models using Fairseq BIBREF20 for up to 200,000 updates. We use SpecAugment BIBREF21 for ASR and ST to alleviate overfitting. Baseline Results ::: Experimental Settings ::: Inference and Evaluation We use a beam size of 5 for all models. We use the best checkpoint by validation loss for MT, and average the last 5 checkpoints for ASR and ST. For MT and ST, we report case-insensitive tokenized BLEU BIBREF22 using sacreBLEU BIBREF23. For ASR, we report word error rate (WER) and character error rate (CER) using VizSeq. Baseline Results ::: Automatic Speech Recognition (ASR) For simplicity, we use the same model architecture for ASR and ST, although we do not leverage ASR models to pretrain ST model encoders later. Table TABREF18 shows the word error rate (WER) and character error rate (CER) for ASR models. We see that French and German perform the best given they are the two highest resource languages in CoVoST. The other languages are relatively low resource (especially Turkish and Swedish) and the ASR models are having difficulties to learn from this data. Baseline Results ::: Machine Translation (MT) MT models take transcripts (without punctuation) as inputs and outputs translations (with punctuation). For simplicity, we do not change the text preprocessing methods for MT to correct this mismatch. Moreover, this mismatch also exists in cascading ST systems, where MT model inputs are the outputs of an ASR model. Table TABREF20 shows the BLEU scores of MT models. We notice that the results are consistent with what we see from ASR models. For example thanks to abundant training data, French has a decent BLEU score of 29.8/25.4. German doesn't perform well, because of less richness of content (transcripts). The other languages are low resource in CoVoST and it is difficult to train decent models without additional data or pre-training techniques. Baseline Results ::: Speech Translation (ST) CoVoST is a many-to-one multilingual ST corpus. While end-to-end one-to-many and many-to-many multilingual ST models have been explored very recently BIBREF8, BIBREF9, many-to-one multilingual models, to our knowledge, have not. We hence use CoVoST to examine this setting. Table TABREF22 and TABREF23 show the BLEU scores for both bilingual and multilingual end-to-end ST models trained on CoVoST. We observe that combining speeches from multiple languages is consistently bringing gains to low-resource languages (all besides French and German). This includes combinations of distant languages, such as Ru+Fr, Tr+Fr and Zh+Fr. Moreover, some combinations do bring gains to high-resource language (French) as well: Es+Fr, Tr+Fr and Mn+Fr. We simply provide the most basic many-to-one multilingual baselines here, and leave the full exploration of the best configurations to future work. 
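The evaluation described in the "Inference and Evaluation" subsection above (case-insensitive tokenized BLEU via sacreBLEU, plus WER for ASR) can be approximated with the sketch below; the hand-rolled WER is illustrative only, since the paper computes WER/CER with VizSeq:

```python
# Sketch of the reported metrics; not the authors' evaluation scripts.
import sacrebleu

def corpus_bleu_ci(hypotheses, references):
    """Case-insensitive tokenized BLEU; `references` holds one reference
    string per hypothesis."""
    return sacrebleu.corpus_bleu(hypotheses, [references], lowercase=True).score

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over whitespace tokens."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j - 1] + cost,
                          d[i - 1][j] + 1,
                          d[i][j - 1] + 1)
    return d[len(r)][len(h)] / max(len(r), 1)
```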
Finally, we note that for some language pairs, absolute BLEU numbers are relatively low as we restrict model training to the supervised data. We encourage the community to improve upon those baselines, for example by leveraging semi-supervised training. Baseline Results ::: Multi-Speaker Evaluation In CoVoST, a large portion of transcripts are covered by multiple speakers with different genders, accents and age groups. Besides the standard corpus-level BLEU scores, we also want to evaluate model output variance on the same content (transcript) but different speakers. We hence propose to group samples (and their sentence BLEU scores) by transcript, and then calculate the average per-group mean and the average per-group coefficient of variation, defined as $\textrm {BLEU}_{MS} = \frac{1}{|G^{\prime }|} \sum _{g \in G^{\prime }} \textrm {Mean}(g)$ and $\textrm {CoefVar}_{MS} = \frac{1}{|G^{\prime }|} \sum _{g \in G^{\prime }} \frac{\textrm {Std}(g)}{\textrm {Mean}(g)}$, where $G$ is the set of sentence BLEU scores grouped by transcript and $G^{\prime } = \lbrace g \mid g\in G, |g|>1, \textrm {Mean}(g) > 0 \rbrace $. $\textrm {BLEU}_{MS}$ provides a normalized quality score as opposed to corpus-level BLEU or the unnormalized average of sentence BLEU. $\textrm {CoefVar}_{MS}$ is a standardized measure of model stability against different speakers (the lower the better). Table TABREF24 shows the $\textrm {BLEU}_{MS}$ and $\textrm {CoefVar}_{MS}$ of our ST models on the CoVoST test set. We see that German and Persian have the worst $\textrm {CoefVar}_{MS}$ (least stable) given their rich speaker diversity in the test set and relatively small train set (see also Figure FIGREF6 and Table TABREF2). Dutch also has a poor $\textrm {CoefVar}_{MS}$ because of the lack of training data. Multilingual models are consistently more stable on low-resource languages. Ru+Fr, Tr+Fr, Fa+Fr and Zh+Fr even have a better $\textrm {CoefVar}_{MS}$ than all individual languages. Conclusion We introduce a multilingual speech-to-text translation corpus, CoVoST, for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. We also provide baseline results, including, to our knowledge, the first end-to-end many-to-one multilingual model for spoken language translation. CoVoST is free to use with a CC0 license, and the additional Tatoeba evaluation samples are also CC-licensed. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Is Arabic one of the 11 languages in CoVost? Only give me the answer and do not output any other words.
Answer:
[ "No", "No" ]
276
3,465
single_doc_qa
4a72c1abdc5e4e708bbf0457f109a992
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The Flickr30K dataset BIBREF0 is a collection of over 30,000 images with 5 crowdsourced descriptions each. It is commonly used to train and evaluate neural network models that generate image descriptions (e.g. BIBREF2 ). An untested assumption behind the dataset is that the descriptions are based on the images, and nothing else. Here are the authors (about the Flickr8K dataset, a subset of Flickr30K): “By asking people to describe the people, objects, scenes and activities that are shown in a picture without giving them any further information about the context in which the picture was taken, we were able to obtain conceptual descriptions that focus only on the information that can be obtained from the image alone.” BIBREF1 What this assumption overlooks is the amount of interpretation or recontextualization carried out by the annotators. Let us take a concrete example. Figure FIGREF1 shows an image from the Flickr30K dataset. This image comes with the five descriptions below. All but the first one contain information that cannot come from the image alone. Relevant parts are highlighted in bold: We need to understand that the descriptions in the Flickr30K dataset are subjective descriptions of events. This can be a good thing: the descriptions tell us what are the salient parts of each image to the average human annotator. So the two humans in Figure FIGREF1 are relevant, but the two soap dispensers are not. But subjectivity can also result in stereotypical descriptions, in this case suggesting that the male is more likely to be the manager, and the female is more likely to be the subordinate. rashtchian2010collecting do note that some descriptions are speculative in nature, which they say hurts the accuracy and the consistency of the descriptions. But the problem is not with the lack of consistency here. Quite the contrary: the problem is that stereotypes may be pervasive enough for the data to be consistently biased. And so language models trained on this data may propagate harmful stereotypes, such as the idea that women are less suited for leadership positions. This paper aims to give an overview of linguistic bias and unwarranted inferences resulting from stereotypes and prejudices. I will build on earlier work on linguistic bias in general BIBREF3 , providing examples from the Flickr30K data, and present a taxonomy of unwarranted inferences. Finally, I will discuss several methods to analyze the data in order to detect biases. Stereotype-driven descriptions Stereotypes are ideas about how other (groups of) people commonly behave and what they are likely to do. These ideas guide the way we talk about the world. I distinguish two kinds of verbal behavior that result from stereotypes: (i) linguistic bias, and (ii) unwarranted inferences. The former is discussed in more detail by beukeboom2014mechanisms, who defines linguistic bias as “a systematic asymmetry in word choice as a function of the social category to which the target belongs.” So this bias becomes visible through the distribution of terms used to describe entities in a particular category. 
Unwarranted inferences are the result of speculation about the image; here, the annotator goes beyond what can be glanced from the image and makes use of their knowledge and expectations about the world to provide an overly specific description. Such descriptions are directly identifiable as such, and in fact we have already seen four of them (descriptions 2–5) discussed earlier. Linguistic bias Generally speaking, people tend to use more concrete or specific language when they have to describe a person that does not meet their expectations. beukeboom2014mechanisms lists several linguistic `tools' that people use to mark individuals who deviate from the norm. I will mention two of them. [leftmargin=0cm] One well-studied example BIBREF4 , BIBREF5 is sexist language, where the sex of a person tends to be mentioned more frequently if their role or occupation is inconsistent with `traditional' gender roles (e.g. female surgeon, male nurse). Beukeboom also notes that adjectives are used to create “more narrow labels [or subtypes] for individuals who do not fit with general social category expectations” (p. 3). E.g. tough woman makes an exception to the `rule' that women aren't considered to be tough. can be used when prior beliefs about a particular social category are violated, e.g. The garbage man was not stupid. See also BIBREF6 . These examples are similar in that the speaker has to put in additional effort to mark the subject for being unusual. But they differ in what we can conclude about the speaker, especially in the context of the Flickr30K data. Negations are much more overtly displaying the annotator's prior beliefs. When one annotator writes that A little boy is eating pie without utensils (image 2659046789), this immediately reveals the annotator's normative beliefs about the world: pie should be eaten with utensils. But when another annotator talks about a girls basketball game (image 8245366095), this cannot be taken as an indication that the annotator is biased about the gender of basketball players; they might just be helpful by providing a detailed description. In section 3 I will discuss how to establish whether or not there is any bias in the data regarding the use of adjectives. Unwarranted inferences Unwarranted inferences are statements about the subject(s) of an image that go beyond what the visual data alone can tell us. They are based on additional assumptions about the world. After inspecting a subset of the Flickr30K data, I have grouped these inferences into six categories (image examples between parentheses): [leftmargin=0cm] We've seen an example of this in the introduction, where the `manager' was said to be talking about job performance and scolding [a worker] in a stern lecture (8063007). Many dark-skinned individuals are called African-American regardless of whether the picture has been taken in the USA or not (4280272). And people who look Asian are called Chinese (1434151732) or Japanese (4834664666). In image 4183120 (Figure FIGREF16 ), people sitting at a gym are said to be watching a game, even though there could be any sort of event going on. But since the location is so strongly associated with sports, crowdworkers readily make the assumption. Quite a few annotations focus on explaining the why of the situation. For example, in image 3963038375 a man is fastening his climbing harness in order to have some fun. 
And in an extreme case, one annotator writes about a picture of a dancing woman that the school is having a special event in order to show the american culture on how other cultures are dealt with in parties (3636329461). This is reminiscent of the Stereotypic Explanatory Bias BIBREF7 , which refers to “the tendency to provide relatively more explanations in descriptions of stereotype inconsistent, compared to consistent behavior” BIBREF6 . So in theory, odd or surprising situations should receive more explanations, since a description alone may not make enough sense in those cases, but it is beyond the scope of this paper to test whether or not the Flickr30K data suffers from the SEB. Older people with children around them are commonly seen as parents (5287405), small children as siblings (205842), men and women as lovers (4429660), groups of young people as friends (36979). Annotators will often guess the status or occupation of people in an image. Sometimes these guesses are relatively general (e.g. college-aged people being called students in image 36979), but other times these are very specific (e.g. a man in a workshop being called a graphics designer, 5867606). Detecting stereotype-driven descriptions In order to get an idea of the kinds of stereotype-driven descriptions that are in the Flickr30K dataset, I made a browser-based annotation tool that shows both the images and their associated descriptions. You can simply leaf through the images by clicking `Next' or `Random' until you find an interesting pattern. Ethnicity/race One interesting pattern is that the ethnicity/race of babies doesn't seem to be mentioned unless the baby is black or asian. In other words: white seems to be the default, and others seem to be marked. How can we tell whether or not the data is actually biased? We don't know whether or not an entity belongs to a particular social class (in this case: ethnic group) until it is marked as such. But we can approximate the proportion by looking at all the images where the annotators have used a marker (in this case: adjectives like black, white, asian), and for those images count how many descriptions (out of five) contain a marker. This gives us an upper bound that tells us how often ethnicity is indicated by the annotators. Note that this upper bound lies somewhere between 20% (one description) and 100% (5 descriptions). Figure TABREF22 presents count data for the ethnic marking of babies. It includes two false positives (talking about a white baby stroller rather than a white baby). In the Asian group there is an additional complication: sometimes the mother gets marked rather than the baby. E.g. An Asian woman holds a baby girl. I have counted these occurrences as well. The numbers in Table TABREF22 are striking: there seems to be a real, systematic difference in ethnicity marking between the groups. We can take one step further and look at all the 697 pictures with the word `baby' in it. If there turn out to be disproportionately many white babies, this strengthens the conclusion that the dataset is biased. I have manually categorized each of the baby images. There are 504 white, 66 asian, and 36 black babies. 73 images do not contain a baby, and 18 images do not fall into any of the other categories. While this does bring down the average number of times each category was marked, it also increases the contrast between white babies (who get marked in less than 1% of the images) and asian/black babies (who get marked much more often). 
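A rough sketch of this counting procedure (not the author's script; the marker list and the naive token matching are simplifying assumptions, and results would still need manual checks to catch false positives such as "white baby stroller"):

```python
# Upper-bound estimate of ethnic marking: for every image where at least one
# of the five descriptions uses a marker, count how many descriptions do.
MARKERS = {"white", "black", "asian"}   # adjectives discussed in the text

def marking_counts(images):
    """images: dict mapping image_id -> list of (five) description strings.
    Returns, per image with at least one marked description, how many of its
    descriptions contain a marker (a value from 1 to 5)."""
    counts = {}
    for image_id, descriptions in images.items():
        marked = sum(
            any(m in d.lower().split() for m in MARKERS) for d in descriptions
        )
        if marked:
            counts[image_id] = marked
    return counts
```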
A next step would be to see whether these observations also hold for other age groups, i.e. children and adults. INLINEFORM0 Other methods It may be difficult to spot patterns by just looking at a collection of images. Another method is to tag all descriptions with part-of-speech information, so that it becomes possible to see e.g. which adjectives are most commonly used for particular nouns. One method readers may find particularly useful is to leverage the structure of Flickr30K Entities BIBREF8 . This dataset enriches Flickr30K by adding coreference annotations, i.e. which phrase in each description refers to the same entity in the corresponding image. I have used this data to create a coreference graph by linking all phrases that refer to the same entity. Following this, I applied Louvain clustering BIBREF9 to the coreference graph, resulting in clusters of expressions that refer to similar entities. Looking at those clusters helps to get a sense of the enormous variation in referring expressions. To get an idea of the richness of this data, here is a small sample of the phrases used to describe beards (cluster 268): a scruffy beard; a thick beard; large white beard; a bubble beard; red facial hair; a braided beard; a flaming red beard. In this case, `red facial hair' really stands out as a description; why not choose the simpler `beard' instead? Discussion In the previous section, I have outlined several methods to manually detect stereotypes, biases, and odd phrases. Because there are many ways in which a phrase can be biased, it is difficult to automatically detect bias from the data. So how should we deal with stereotype-driven descriptions? Conclusion This paper provided a taxonomy of stereotype-driven descriptions in the Flickr30K dataset. I have divided these descriptions into two classes: linguistic bias and unwarranted inferences. The former corresponds to the annotators' choice of words when confronted with an image that may or may not match their stereotypical expectancies. The latter corresponds to the tendency of annotators to go beyond what the physical data can tell us, and expand their descriptions based on their past experiences and knowledge of the world. Acknowledging these phenomena is important, because on the one hand it helps us think about what is learnable from the data, and on the other hand it serves as a warning: if we train and evaluate language models on this data, we are effectively teaching them to be biased. I have also looked at methods to detect stereotype-driven descriptions, but due to the richness of language it is difficult to find an automated measure. Depending on whether your goal is production or interpretation, it may either be useful to suppress or to emphasize biases in human language. Finally, I have discussed stereotyping behavior as the addition of a contextual layer on top of a more basic description. This raises the question what kind of descriptions we would like our models to produce. Acknowledgments Thanks to Piek Vossen and Antske Fokkens for discussion, and to Desmond Elliott and an anonymous reviewer for comments on an earlier version of this paper. This research was supported by the Netherlands Organization for Scientific Research (NWO) via the Spinoza-prize awarded to Piek Vossen (SPI 30-673, 2014-2019). Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Which methods are considered to find examples of biases and unwarranted inferences? Only give me the answer and do not output any other words.
Answer:
[ "spot patterns by just looking at a collection of images, tag all descriptions with part-of-speech information, I applied Louvain clustering", "Looking for adjectives marking the noun \"baby\" and also looking for most-common adjectives related to certain nouns using POS-tagging" ]
276
2,795
single_doc_qa
7253b4e13a82408bbf670475ecf72dc2
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Transport Aircraft for IAF - Page 67 - Bharat Rakshak Transport Aircraft for IAF Re: Transport Aircraft for IAF Postby abhik » 17 Nov 2014 05:55 +1, Air India recently sold their entire fleet of Boeing 777s. afaik the A330 MRTT does not make any structural mods or add anything internally in cargo or passenger cabin. it just relies on the intrinsic 110 tons of fuel. external refueling pods are added and internally the control station and cameras for the operator i guess. so its a easy conversion from a passenger layout to the AAR mode - mostly ripping out the passenger cabin of all extra stuff and retuning the FCS for any changes in COG. this should have been pursued years ago the IL78 adds a palletized drum tank system inside its cargo bay due to paucity of intrinsic fuel but it can be removed and a/c converted back to cargo hauling or send off to russia for Phalcon structural mods if we want it that way. they will however need to change engines to PS90 as they have the old engines http://www.airplane-pictures.net/images ... 7/5616.jpg the RAF is already gone that route in 2011 http://www.defensenews.com/article/2011 ... -Refuelers LONDON - Airbus Military has delivered the first of 12 A330-200 airliners due to be converted into in-flight refueling planes for the British Royal Air Force by Cobham Aviation Services. The aircraft, part of an order of 14 jets, will be modified with aerial refueling pods and other equipment at Cobham's newly refurbished facility in Bournemouth, England. The first two aircraft have already been converted by Airbus in Spain. The multirole tanker aircraft are being provided to the RAF under a private finance initiative service deal led by Airbus parent EADS. Seven of the planes will be operated full time by the RAF. The remainder will be available for lease in the third-party market, with the proviso that they can be returned to British military service to meet any surge in demand. All of the aircraft, to be known as the Voyager in RAF service, will be fitted with two wing-mounted refueling pods, while half the fleet will also be fitted for, but not necessarily with, a center-line mounted unit. The refueling units are being supplied by Cobham. The first aircraft will become operational in a passenger and freight transport role by the end of this year to start relieving pressure on the RAF's hard-pressed assets. Despite the increasing fragility of current RAF in-flight refueling operations, the new capability is not contracted to start being used in this role until 2015. All 14 Voyagers are scheduled to be available for RAF operations by the middle of the decade. The A330 will replace the increasingly ancient Tristar and VC-10 refuelers now in service. Push the 6 Il-476 from refueler to AEW duty. Phalcon them up Not sure if that is a good path to follow. For one they all should be sent to pasture in about 8 years. Then if the are to be phalconed up - the requires major structural changes. Not worth that cost. Whatever happened ot the two new ones that were supposed ot be ordered? the IL78 can be easily converted back to IL76 cargo hauling. only the fuel tank inside cargo bay needs removal...infact that was even mentioned in initial days as swing role fuel/cargo. 
Postby Cybaru » 17 Nov 2014 07:55 I am talking about the new il78 that we ordered recently in refueling role. Sorry for the mix up. They are the same platform, that I why i used 476 or 76 to identify it. 777 carries more internal fuel than the A330. We suck! From the KC-777 program. http://www.globalsecurity.org/military/ ... kc-777.htm "the KC-777 would be 209 feet long with a wingspan of 212 feet, 7 inches. That's the same size as the 777-200LR commercial jet. The KC-777 would be able to carry far more fuel, cargo and passengers than either the KC-767 or the Airbus A330 tanker. The KC-767 offers more operational flexibility, while the KC-777 would be better suited for long-range strategic missions in which more cargo needs to be delivered. The KC-777 would be able to carry more than 350,000 pounds (160,000 kilograms) of fuel and offload more than 220,000 pounds (100,000 kg) of it on a mission of 500 nautical miles (900 kilometers). On the other hand, the KC-767 can lift off with more than 200,000 pounds (90,000 kg) of fuel and offload more than 130,000 pounds (60,000 kg) in a similar mission. The KC-777 would be able to deliver 200 percent more fuel after flying 1,000 nautical miles than older Air Force KC-135s. The KC-777 could carry up to 37 pallets of cargo, compared to the 19 pallets for the KC-767." Postby Cosmo_R » 18 Nov 2014 04:31 Viv S wrote: From Ajai Shukla's article - HAL points out that, since each Avro flies barely 350 hours every year, most of them have a residual life of about 80,000 hours. In a request for information (RFI) released on August 15, HAL has proposed replacing the aircraft’s engines (Rolls Royce Dart) with “modern fuel efficient engines”. So, the IAF's Avros have a residual life of 228 years at the current rate of usage. Ain't life grand? At zero up time, it could reach infinity. Relax Cy. Kc777 has no client. Usaf is going with kc767 and almost everyone else with a330. We don't have the number of heavies and long missions of usaf else I would say convert an124. KC777 will be extremely expensive given the demand/backlog for the 777 and the 777x. Any buyer would have to virtually pay for the increase in capacity. I think the 767 production line is closed. so the proposed KC767 Boeing is supposed to deliver 18 by 2017..that can be managed from mothballed and cargo hauler airframes on the market. but to meet the final order of around 180 will they not have to open the production line unless such a huge number were available on the market? I do get the spider feel this program again will be cancelled in favour of a in-production plane like the 777X ? I wasn't suggesting we get the KC777. All I was doing was comparing what possibly the 777 could offload compared to A330. It carries 171000 liters of fuel versus 130000 liters that the A330 carries. If we had older 777s in stock, we could have quite easily converted them to this config. The cost to us would be miniscule just the refurbishing cost vs acquiring a new type. Singha wrote: I think the 767 production line is closed. so the proposed KC767 Boeing is supposed to deliver 18 by 2017..that can be managed from mothballed and cargo hauler airframes on the market. The Line is open, they have a backlog of around 50 (All Fed ex), with Fed Ex placing a small order this year. The Pegasus order is for all new builds, and so will the follow on order. The only reason for any nation to buy the 767 tanker is going to be because of the ability to hard bargain with Boeing given that the commercial future of the 767 is dead. 
This also allows a potential buyer to purchase cheap spares from the open market, or club its logistical and inventory purchase with that of the USAF. Other than that and perhaps availability (which would be doubtful once USAF pushes through a larger order) there is really no technical reason to purchase the this tanker over the A330 which by all accounts is a superior tanker in addition to being a much much better airliner in general. IAI is doing conversations for the 767 and its called the 767 MMTT http://www.iai.co.il/sip_storage/FILES/1/38471.pdf Cybaru wrote: I wasn't suggesting we get the KC777. All I was doing was comparing what possibly the 777 could offload compared to A330. It carries 171000 liters of fuel versus 130000 liters that the A330 carries. If we had older 777s in stock, we could have quite easily converted them to this config. The cost to us would be miniscule just the refurbishing cost vs acquiring a new type. The cost of converting a commercial airliner to a tanker, certifying it and running a full fledged test program is by no means small. There is absolutely no justification for that sort of cost over and above the capability that that A330 provides. If it were a certified and tested conversion, that would be a different matter. Postby Kartik » 21 Nov 2014 12:27 Cybaru wrote: Why? If the airframe can handle more flight hours, why not? because it is a very very old airframe as is. Maintenance spares won't be available easily even as of now, then imagine how it'll be 20-30 years from now.. and as things stood anyway, the HS-748 offered very little in terms of payload and range versus a C-295 class aircraft. The C-295 offers a very credible light transport, whereas the HS-748's role in the IAF was more akin to a transport trainer and for communication duties with little operational use. Having seen a dozen or so HS-748s parked at Vadodara airport all through my childhood, I never once saw one in the air. They just seemed to be stored out in the open. Upon asking an IAF transport pilot who was my friend's father, he remarked "zyaada kaam ke nahi hain yeh". Why would you expend more capital on what is essentially an obsolete airframe, even if theoretically it had not yet reached its service life? You'd have to re-engine it, put new avionics on board and even that wouldn't suffice for para dropping requirements..it was operationally never suitable for para dropping, which is an important mission for transport aircraft and had deficiencies in hot and high climes as well. Unfortunately, the 748 was never meant to be a military transport. At the request of IAF, its door was enlarged to enable larger cargo items to be loaded and to allow para dropping without hitting the tail plane. However, to load a jeep in it, a 30-ft long ramp was required. The jeep would drive in and insert its front wheels into the aircraft. Then it had to be manually lifted and turned to get it in. Unloading it was just as difficult. Para dropping of troops or cargo even from the aircraft with the enlarged door was considered too dangerous with the risk of hitting the tail plane. The aircraft's performance at hot and high airfields was hopelessly inadequate. Eventually IAF acquired the tail-loading An-32s which were powered specifically for IAF's need for operating in the Himalayas. 
BRF article -Avro in IAF service Now unless you want to overcome all these through a costly, time consuming engineering re-design program, that too without access to original documents since this airplane was designed in the 1960s, there is no question of keeping them going for another 40 years. By which time the original design would be over 80 years old and with no one on earth but the IAF as an operator and HAL as the agency supporting it. Hardly a situation anyone would want. abhik wrote: +1, Air India recently sold their entire fleet of Boeing 777s. Only 5 of the Boeing 777-200LR, to Etihad Airways, which IMO was a bad decision..they could have reconfigured the airplanes with just 2 classes and continued to fly them to the US, non-stop. The remaining 3 777-200LR were offered for lease but are still a part of AI's fleet since they didn't find any takers. This particular model hardly sold much and was developed for ultra-long range flights..it was the least successful 777 model and clearly AI goofed up on the configuration by going for these in place of the 300ER. The economics however didn't make too much sense for AI eventually. there are 13 777-300ER as a part of their fleet ahd their economics is much better. Govt. to decide tomorrow on whether to go ahead and allow the IAF to verify the technical details of the C-295 bid by Tata-Airbus instead of scrapping the tender due to single vendor situation. The government will decide on Saturday whether to press ahead with the Rs 13,000 crore mega project for the private sector to supply 56 medium transport aircraft to the IAF despite only a single bidder, the Tata-Airbus consortium, being in the fray. Though the defence acquisitions council (DAC) chaired by Manohar Parrikar will take the final decision, MoD sources on Tuesday said the "emerging dominant view" is that green signal should be given to the crucial project designed to promote Indian private sector's entry into the domestic aerospace arena with foreign collaboration. "The Tata-Airbus technical and commercial bid is a credible offer submitted in a competitive environment. The other seven contenders backed out for one reason or the other," said a source. IAF has now sought the clearance of the DAC -- the first such meeting to be chaired by Parrikar after becoming defence minister on November 10 -- to begin technical evaluation of the C-295 aircraft offered by Airbus Defence & Space and Tata Advanced Systems. Though it has become a single-vendor situation, the DAC can approve it if it wants as per existing procurement procedures. Of the eight foreign aviation majors that got the global tender, American Boeing and Lockheed-Martin as well as Brazilian Embraer said they did not manufacture the class of aircraft being sought by IAF. Refusing to take part in the tender, Russian Rosoboronexport said it wanted a fresh design and development project. Antonov of Ukraine wanted yet another extension of the bid submission deadline due to the ongoing conflict in Crimea. Swedish Saab said it had shut down its assembly line for such aircraft. Then, Alenia Aermacchi was linked to Italian conglomerate Finmeccanica, which has been slapped with "a partial ban" after the infamous VVIP helicopter scandal. "All this left only the European consortium Airbus. The DAC will have to take a call since re-tendering may lead to the same situation," said the source. 
Incidentally, it was the Modi government's first DAC in July -- then headed by Arun Jaitley - which revived the Avro replacement project after it was put on hold by the UPA-2 regime last year due to strong opposition from the powerful PSU lobby and ministers like Praful Patel, as reported by TOI earlier. Apart from the critical need to encourage the private sector to enter defence production in a big way, especially in the aerospace arena where Hindustan Aeronautics enjoys a monopoly, its felt the defence PSU's order books are already overflowing with projects. Fingers crossed. Hopefully sense will prevail. Why was lr got? Er is capable of Dubai to sfo nonstop. Lr is overkill unless we want Delhi to Peru . Singha wrote: Why was lr got? Er is capable of Dubai to sfo nonstop. they wanted it for non-stop routes from India to the west coast of the US. But with fuel prices going higher and with the lower seat count on the 777-200LR, the seat mile costs grew too high. A 3 class configuration only made matters worse. A higher density configuration with more economy class seats and just 12-15 Business class seats would have been better perhaps, especially if they didn't have very high First Class load factors. LR and ER is better if you want to have a better payload down below for long haul. Ultimately, the best bet is going to come form the 787's that take a fewer people (so you can do the longer routes) with still a competitive CASM, and the B and F class folks will pay good money for newer aircraft. Postby Kartik » 04 Dec 2014 12:55 Lets see if there is any forward movement on the stalled MTA project once Putin arrives in New Delhi Major defence deals to be signed during Putin-Modi summit In this connection, it is expected that during the summit, Russia and India may ultimately resolve several long-delayed agreements on military-technical cooperation projects between the two countries and sign them finally for their implementation. These agreements, above all, include joint Fifth Generation Fighter Aircraft (FGFA) project and joint development of Multi-role Transport Aircraft (MTA). A final deal on FGFA for production has been delayed because the Indian Air Force (IAF) did not approve the design and work-share. Now Russia has reportedly agreed that the jet would be a two-seat design, not a one-seater. India’s work-share would also be increased from18 percent to 25 percent, and even up to 40-50 percent in the near future, in view of the steady development of the Indian aviation industry. Defence and SecurityAccording to the agreement, India’s stealth air-to-air missile “Astra” along with Indo-Russian BrahMos supersonic cruise missile will be mounted on the FGFA. The preliminary design agreement on FGFA had been signed in 2010 between Indian HAL and Russian Sukhoi Design Bureau to build the jet for the use by both countries. The final design contract was to be signed in July-August 2012. But the deadline has already passed. According to the Indian media reports, under the programme, India is expected to build 200 fighter jets at the cost of $30 billion. FGFA is not the only Indo-Russia joint project. The two countries also signed an agreement on the joint development of MTA in 2007, based on Il-214 Russian plane. The cost of the $600 million project is being equally shared by the two countries. The MTA, when developed, will have ready market for 205 aircraft - 45 for the Indian Air Force, 100 for the Russian Air Force, and 60 more for exporting to friendly countries. 
The international market for MTA is estimated at 390 planes. Under the agreement, thirty percent of the annual production of planes could be exported to third countries. The MTA was expected to go in service with the Russian and Indian Air Forces in 2015. But the project faced a number of problems, delaying the development of the MTA. The project got into rough weather after India felt there was nothing much for Indian engineers and scientists to do in the design and development of the MTA. However, all the issues related to the project were resolved with the Russians when the HAL undertook to carry out design and development of its work-share of MTA at Aircraft R&D Centre at Bangalore. Russian Ilyushin Design Bureau and the Irkut Corporation and HAL are participating in the project. The first flight is expected to take place in 2017-18. The MTA would replace the AN- 32 aircraft being used by the IAF. It will be used for both cargo and troop transportation, para-drop and air drop of supplies, including low-altitude parachute extraction system. BrahMos missile exports a challenging proposition Another key deal expected to be signed during the summit, is for the development of “BrahMos mini missile” by the Indo-Russian joint venture BrahMos Aerospace which manufactures supersonic cruise missile. BrahMos’ new CEO Sudhir Mishra recently said he was hopeful that a deal to develop the mini version of the missile will be signed during Putin’s summit with Modi. “We are hoping to sign a tripartite agreement between DRDO, NPOM lab and BrahMos Aerospace during the planned visit of Russian President in December,” Mishra said. He said that the new missile will have a speed of 3.5 mach and carry a payload of 300 km up to a range of 290 km. In size, it will be about half of the present missile, which is around 10 metres long. The missile can be integrated with different platforms, including submarines and FGFA. It is planned to be inducted into service by 2017. Modi-Abbott to upgrade defence ties A new dimension: In a first, India and Australia will also set up a mechanism to discuss “synergies in integrating defence system”, including research and development cooperation on integrating defence equipment that both countries currently purchase, for example, U.S’s C-17 Globemaster III, according to officials. ^^That report about MTA is fishy. First it says that India has nothing to learn from an existing design (duh) and then says the issue has been resolved. How? Next it says India's need is 45 planes to replace over 100 An-32s. It also speculates about the export potential which may be nonexistent unless we sell it for peanuts. This is a scam which only aims to create screwdriver jobs at HAL, stall any attempt to introduce private players into the aviation market and continue the Russian gravy train. My fear is the Russkies have our testiments in a firm grip with key components of Brahmos, nuke subs, Su30mki etc and we may be jerked around. (They need to be more definitive about "MTA" - Multirole vs. Medium) The Indians had not selected an engine (among other things) for the MTA with the Russians. Perhaps that has been resolved now. On export numbers, IIRC, it was the responsibility of Rosoboronexport. ????? Kartik wrote: The MTA would replace the AN- 32 aircraft being used by the IAF. It will be used for both cargo and troop transportation, para-drop and air drop of supplies, including low-altitude parachute extraction system. Pardon my ignorance. The Avro and An-32 have different upgrade paths. 
How are the replacements for these venerable aircraft different in terms of use cases in IAF. Cannot one platform replace both these types? (Either MTA or C-295) In this case, I feel they should have just gone with screwdrivergiri (production tech) and got to market first. There is no jet-powered transporter in this range! Just license produce the IL-214 with the PD-14M, glass cockpit and a state-of-the-art COTS avionics computer. In my view, it was a low hanging fruit, which they completely messed up! They could have learnt on how to adopt the plane for the 160-200 seater. indranilroy wrote: They could have learnt on how to adopt the plane for the 160-200 seater. Yes, the MTA project should fold the Avro, An-32 and the regional transport role and become a conversion project rather a development one. The driving numbers will come from the regional transport (thousands in India itself) rather than the Avro or medium transport roles (max 300 between them). This changes the ball game and introduces all kinds of possibilities. But I'm pretty sure that the Il-214/MTA is not the way to go because it will take a decade or more to arrive. A good possibility was another Antonov, the An-148 but it has some mechanical glitches apparently besides being bogged down in the Ukraine mess. Maybe the Russians can "relocate" the aircraft to Russia? The other possibility is the BAe-146 which is ironically another Avro. We should remember that both the HS-748 "Avro" and An-32 were regional airliners that were converted to military use, not the other way around. HAL or a private firm will pick up a lot of experience in the conversion process itself. The Sukhoi Superjet is already in production/orders,with over 100+ for Russian and intl. customers. It is ideal for regional transport,perfect for flights to smaller Tier-2/3 cities from metros. If we really want a regional jet this is the fastest way toi go,we can set up a manufacturing unti here for the same at an HAL unit. Postby shaun » 05 Dec 2014 15:24 Its an international projects, with components outsourced from different international vendors . Over 30 foreign partnership companies are involved in the project and partly financed by Italy. Sukhoi is good for passenger use but wont be suitable for military, rough field use. The shoulder wing jets like the An-148 have slower speeds and better ground clearance. The Bae-146 was usedby Druk Air in Bhutan so it should do OK in the ALGs. If we don't fold our requirements then we should go with something like the Superjet which we will at least be able to make in India and also modify to stretched versions. Unless we have a clear path to operational clearance within 10 yrs for the RTA project vetted by our top industrial houses, it is pie-in-the-sky and should be dropped. The RTA will be big enough to keep 2-3 factories humming and leapfrog our capabilities. If we don't get our act together almost immediately, we will miss the boat, just like our trainer fiascos. I don't think Superjet fits into our scheme of things. We should think as a country and see to it that our programs don't trample on each other. First, the more certain ones: 1. Mahindras NM5 and Airvans can care of the low-cost but sturdy 5,8,10 and 18-seater section. 2. Saras had such great potential for being the high performance 14-18 seater. But I have almost given up on it. This section will most probably be taken up by the Tata-built Do-228 NG. 3. 
We should standardize the C-295 as the Avro/An-32 replacement and create a 70-80 seater variant out of it. And then the more wishful ones: 1. If the RTA is going to be a jet, then make it a 100-130 seater. I don't expect the first prototype to take the sky before 2025. I feel it is too big of a jump where we don't even have a base. With LCA, at least we were at least license producing other fighters. 4. Building on the IL-214, the MTA was on a more sure footing. But, I can't see how the first prototype can to take to the sky before 2019(more than 10 years since MTAL was formed)! If the transport plane materializes, then one can imagine making a civilian 150-200 seater version of the same. But this program needs a push. Will Putin's visit be able to galvanize this into the next symbol of Indo-Russian cooperation. Probably not! Postby GeorgeWelch » 12 Dec 2014 23:39 http://www.ctvnews.ca/canada/defence-de ... -1.2144472 The Defence Department intends to purchase a Boeing C-17 Globemaster III, a large military transport plane that comes with a price tag of just under $200 million, CTV News has learned It's difficult to get a good count, but by some sources, if this and the 4 Australia planes go through, there will only be 5 left. X-Posting from FGFA thread. Despite Putin’s visit, two pacts on military aircraft still in doldrums President Vladimir Putin may have come and gone but stalemate largely persists over two key long-pending India-Russian defence projects, the fifth-generation fighter aircraft (FGFA) and military multirole transport aircraft (MTA). The deadlock over the MTA, which were initially envisaged to gradually replace IAF's ageing fleet of the medium-lift AN-32 aircraft, seems to be much more serious. India now wants to ascertain the cost viability of the twin-engine transport aircraft in comparison to similar planes available in the market. There are also questions about the MTA's "predicted timelines for delivery" as well as its failure to meet the high-altitude requirements, which need to be answered before India even thinks of inking the full-scale contract for the project, said sources. Postby Gyan » 13 Dec 2014 12:29 indranilroy wrote: I don't think Superjet fits into our scheme of things. We should think as a country and see to it that our programs don't trample on each other. 1. Mahindras NM5 and Airvans can care of the low-cost but sturdy 5,8,10 and 18-seater section. Righto 2. Saras had such great potential for being the high performance 14-18 seater. But I have almost given up on it. This section will most probably be taken up by the Tata-built Do-228 NG. We need future extended variants of presurrized aircraft like 30 seater Saras and say 30 seater unpressurized Do-328 NG. 3. We should standardize the C-295 as the Avro/An-32 replacement and create a Civilian turboprop pressurized cabin 70-80 seater variant out of it. 1. If the RTA is going to be a jet, then make it a 100-130 seater. Agreeeeeed I don't expect the first prototype to take the sky before 2025. I feel it is too big of a jump where we don't even have a base. With LCA, at least we were at least license producing other fighters. Though I think that we should participate in Russian MS-21 and also the wide body follow on. 4. Building on the IL-214, the MTA was on a more sure footing. But, I can't see how the first prototype can to take to the sky before 2019(more than 10 years since MTAL was formed)! If the transport plane materializes, then one can imagine making a civilian 150-200 seater version of the same. 
Though I think that we should participate in Russian MS-21 and also the wide body follow on. But this program needs a push. Will Putin's visit be able to galvanize this into the next symbol of Indo-Russian cooperation. Probably not! Absence of any specifics on Sukhoi Superjet, MS-21, Wide body aircraft, Mi-38, MRTA, FGFA, even after Putin visit is very disappointing. FlightGlobal- Boeing sitting on 8 unsold C-17s By: Dan ParsonsWashington DCSource: Flightglobal.com This story is sourced from Flightglobal.com 12 hours agoBoeing has sold two more C-17 transports to an undisclosed customer, but it will likely end the year with eight unsold white tails. There are 10 Boeing C-17 airlifters in various stages of assembly at the company’s Long Beach, California, production facility. Two of the aircraft are spoken for by an unnamed customer, Boeing says. Boeing is trying to sell off the other eight white tails, which will be the last produced before the factory is shuttered sometime in the summer of 2015. The 279th – and final – C-17 fuselage will be mated to its wings in January or February, programme spokeswoman Tiffany Pitts tells Flightglobal. The operation is California’s last remaining aircraft production line and the lone widebody military aircraft production line in the USA, according to Boeing. At least two countries – Australia and Canada – have publicly announced an intention to purchase a C-17, though neither factor into Boeing’s future planning, Pitts says. Until contracts are finalised, the number available remains eight, she says. The Royal Canadian Air Force already has four C-17As, according to Flightglobal’s World Air Forces 2014 directory. Canadian news outlets reported earlier in December that the air force would buy one C-17 with money left over at the end of 2015. Australia is further along with its bid to purchase C-17s. The US Defense Security Cooperation Agency in November announced Australia was approved to buy up to four C-17s and support equipment for $1.6 billion. Boeing has plans to store any unsold C-17s following closure of its production line, Pitts says. “I’m hoping they all will be sold before then, but we’ve had plans in place for a very long time to store and maintain the aircraft if that doesn’t happen,” she says. the IAF will need to factor in the demand vs availability of C-17s and stock up with a follow-on order quickly. The initial plan to have 16 C-17s may not fructify, considering that there are just 8 left now, with Australia having announced plans to buy 4 more. why are they closing the line if it has demands ??? Real estate sales tactics probably. Buy now last 8 3bhk flats Saar. krishnan wrote: why are they closing the line if it has demands ??? It requires 3 years lead time to order raw materials/parts from all of its sub-vendors. All current firm orders have been fulfilled, and no new orders have come. Anticipating a need for a few more aircrafts, they produced 10 extra (self-funded) units before production winded down. Bottom line is they don't make money keeping an idle plant around with all its employees and infrastructure. At most what they will likely do is keep a limited infrastructure around for a few more years in case a bunch of new orders come. They can then see if it makes business sense to re-open the plant. Postby Aditya_V » 17 Dec 2014 12:19 Wish this can be brought to the notice of Journos/ Poster when slamming LCA/ Arjun and other indigenous projects. If there are no orders there will be no efficiency. 
Dec 10, 2014 :: Russia launches Il-76MDM upgrade programme Russia's Ilyushin has started to upgrade a first Russian Air Force (VVS) Ilyushin Il-76MD 'Candid' military transport aircraft to Il-76MDM standard, company officials have told IHS Jane's. The main features of the upgrade include refurbished engines and upgraded avionics. The modernisation is being conducted at the VVS's Military Transport Aviation (MTA) maintenance facility based at the Ilyushin division in Zhukovsky city near Moscow. A senior Ilyushin official told IHS Jane's that the upgrade of the first aircraft will be finished in 18 months. Subsequent aircraft will take less time to complete the process, however. When the modernisation is finished the initial Il-76MDM will undergo state trials. The upgrade process for subsequent aircraft will begin when the trials programme is completed. IHS Jane's was previously told by a VVS senior official that the modernisation of 41 MTA Il-76MDs is planned by 2020. While the Il-76MDM upgrade retains the old D-30KP engine (compared with the PS-90A engine equipping the new Il-76MD-90A/Il-476), the modernisation effort should match the aircraft's onboard electronics with those of the newbuild Il-76MD-90A. This and other efforts mean the cost of modernising the Il-76MD to Il-76MDM is only a third of that of a newbuild Il-76MD-90A. The existing D-30KP engines are to be enhanced to increase their service life. The overall aircraft's service life will be extended by 15 years. The upgrade works are planned to be conducted in an aviation repair factory or in the MTA's aircraft maintenance facility. As a result, the Ulyanovsk-based Aviastar-SP plant, which is building the Il-76MD-90A, is not involved in the Il-76MD to Il-76MDM modernisation programme. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Besides the Boeing C-17, what other transport aircraft is the IAF considering for acquisition? Only give me the answer and do not output any other words.
Answer:
[ "The IAF is considering the acquisition of the Airbus A330 MRTT (Multi-Role Tanker Transport) besides the Boeing C-17." ]
276
7,606
single_doc_qa
fbc98f4512f1434b819bf8a6d225a3a1
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction How humans process language has become increasingly relevant in natural language processing since physiological data during language understanding is more accessible and recorded with less effort. In this work, we focus on eye-tracking and electroencephalography (EEG) recordings to capture the reading process. On one hand, eye movement data provides millisecond-accurate records about where humans look when they are reading, and is highly correlated with the cognitive load associated with different stages of text processing. On the other hand, EEG records electrical brain activity across the scalp and is a direct measure of physiological processes, including language processing. The combination of both measurement methods enables us to study the language understanding process in a more natural setting, where participants read full sentences at a time, in their own speed. Eye-tracking then permits us to define exact word boundaries in the timeline of a subject reading a sentence, allowing the extraction of brain activity signals for each word. Human cognitive language processing data is immensely useful for NLP: Not only can it be leveraged to improve NLP applications (e.g. barrett2016weakly for part-of-speech tagging or klerke2016improving for sentence compression), but also to evaluate state-of-the-art machine learning systems. For example, hollenstein2019cognival evaluate word embeddings, or schwartz2019inducing fine-tune language models with brain-relevant bias. Additionally, the availability of labelled data plays a crucial role in all supervised machine learning applications. Physiological data can be used to understand and improve the labelling process (e.g. tokunaga2017eye), and, for instance, to build cost models for active learning scenarios BIBREF0. Is it possible to replace this expensive manual work with models trained on physiological activity data recorded from humans while reading? That is to say, can we find and extract relevant aspects of text understanding and annotation directly from the source, i.e. eye-tracking and brain activity signals during reading? Motivated by these questions and our previously released dataset, ZuCo 1.0 BIBREF1, we developed this new corpus, where we specifically aim to collect recordings during natural reading as well as during annotation. We provide the first dataset of simultaneous eye movement and brain activity recordings to analyze and compare normal reading to task-specific reading during annotation. The Zurich Cognitive Language Processing Corpus (ZuCo) 2.0, including raw and preprocessed eye-tracking and electroencephalography (EEG) data of 18 subjects, as well as the recording and preprocessing scripts, is publicly available at https://osf.io/2urht/. It contains physiological data of each subject reading 739 English sentences from Wikipedia (see example in Figure FIGREF1). We want to highlight the re-use potential of this data. In addition to the psycholinguistic motivation, this corpus is especially tailored for training and evaluating machine learning algorithms for NLP purposes. We conduct a detailed technical validation of the data as proof of the quality of the recordings. Related Work Some eye-tracking corpora of natural reading (e.g. 
the Dundee BIBREF2, Provo BIBREF3 and GECO corpus BIBREF4), and a few EEG corpora (for example, the UCL corpus BIBREF5) are available. It has been shown that this type of cognitive processing data is useful for improving and evaluating NLP methods (e.g. barrett2018sequence,hollenstein2019cognival, hale2018finding). However, before the Zurich Cognitive Language Processing Corpus (ZuCo 1.0), there was no available data for simultaneous eye-tracking and EEG recordings of natural reading. dimigen2011coregistration studied the linguistic effects of eye movements and EEG co-registration in natural reading and showed that they accurately represent lexical processing. Moreover, the simultaneous recordings are crucial to extract word-level brain activity signals. While the above mentioned studies analyze and leverage natural reading, some NLP work has used eye-tracking during annotation (but, as of yet, not EEG data). mishra2016predicting and joshi2014measuring recorded eye-tracking during binary sentiment annotation (positive/negative). This data was used to determine the annotation complexity of the text passages based on eye movement metrics and for sarcasm detection BIBREF6. Moreover, eye-tracking has been used to analyze the word sense annotation process in Hindi BIBREF7, named entity annotation in Japanese BIBREF8, and to leverage annotator gaze behaviour for coreference resolution BIBREF9. Finally, tomanek2010cognitive used eye-tracking data during entity annotation to build a cost model for active learning. However, until now there is no available data or research that analyzes the differences in the human processing of normal reading versus annotation. Related Work ::: ZuCo1.0 In previous work, we recorded a first dataset of simultaneous eye-tracking and EEG during natural reading BIBREF1. ZuCo 1.0 consists of three reading tasks, two of which contain very similar reading material and experiments as presented in the current work. However, the main difference and reason for recording ZuCo 2.0, consists in the experiment procedure. For ZuCo 1.0 the normal reading and task-specific reading paradigms were recorded in different sessions on different days. Therefore, the recorded data is not appropriate as a means of comparison between natural reading and annotation, since the differences in the brain activity data might result mostly from the different sessions due to the sensitivity of EEG. This, and extending the dataset with more sentences and more subjects, were the main factors for recording the current corpus. We purposefully maintained an overlap of some sentences between both datasets to allow additional analyses (details are described in Section SECREF7). Corpus Construction In this section we describe the contents and experimental design of the ZuCo 2.0 corpus. Corpus Construction ::: Participants We recorded data from 19 participants and discarded the data of one of them due to technical difficulties with the eye-tracking calibration. Hence, we share the data of 18 participants. All participants are healthy adults (mean age = 34 (SD=8.3), 10 females). Their native language is English, originating from Australia, Canada, UK, USA or South Africa. Two participants are left-handed and three participants wear glasses for reading. Details on subject demographics can be found in Table TABREF4. All participants gave written consent for their participation and the re-use of the data prior to the start of the experiments. 
The study was approved by the Ethics Commission of the University of Zurich. Corpus Construction ::: Reading materials During the recording session, the participants read 739 sentences that were selected from the Wikipedia corpus provided by culotta2006integrating. This corpus was chosen because it provides annotations of semantic relations. We included seven of the originally defined relation types: political_affiliation, education, founder, wife/husband, job_title, nationality, and employer. The sentences were chosen in the same length range as ZuCo 1.0, and with similar Flesch reading ease scores. The dataset statistics are shown in Table TABREF2. Of the 739 sentences, the participants read 349 sentences in a normal reading paradigm, and 390 sentences in a task-specific reading paradigm, in which they had to determine whether a certain relation type occurred in the sentence or not. Table TABREF3 shows the distribution of the different relation types in the sentences of the task-specific annotation paradigm. Purposefully, there are 63 duplicates between the normal reading and the task-specific sentences (8% of all sentences). The intention of these duplicate sentences is to provide a set of sentences read twice by all participants with a different task in mind. Hence, this enables the comparison of eye-tracking and brain activity data when reading normally and when annotating specific relations (see examples in Section SECREF4). Furthermore, there is also an overlap in the sentences between ZuCo 1.0 and ZuCo 2.0. 100 normal reading and 85 task-specific sentences recorded for this dataset were already recorded in ZuCo 1.0. This allows for comparisons between the different recording procedures (i.e. session-specific effects) and between more participants (subject-specific effects). Corpus Construction ::: Experimental design As mentioned above, we recorded two different reading tasks for the ZuCo 2.0 dataset. During both tasks the participants were able to read in their own speed, using a control pad to move to the next sentence and to answer the control questions, which allowed for natural reading. Since each subject reads at their own personal pace, the reading speed between varies between subjects. Table TABREF4 shows the average reading speed for each task, i.e. the average number of seconds a subject spends per sentence before switching to the next one. All 739 sentences were recorded in a single session for each participant. The duration of the recording sessions was between 100 and 180 minutes, depending on the time required to set up and calibrate the devices, and the personal reading speed of the participants. We recorded 14 blocks of approx. 50 sentences, alternating between tasks: 50 sentences of normal reading, followed by 50 sentences of task-specific reading. The order of blocks and sentences within blocks was identical for all subjects. Each sentence block was preceded by a practice round of three sentences. Corpus Construction ::: Experimental design ::: Normal reading (NR) In the first task, participants were instructed to read the sentences naturally, without any specific task other than comprehension. Participants were told to read the sentences normally without any special instructions. Figure FIGREF8 (left) shows an example sentence as it was depicted on the screen during recording. As shown in Figure FIGREF8 (middle), the control condition for this task consisted of single-choice questions about the content of the previous sentence. 
12% of randomly selected sentences were followed by such a comprehension question with three answer options on a separate screen. Corpus Construction ::: Experimental design ::: Task-specific reading (TSR) In the second task, the participants were instructed to search for a specific relation in each sentence they read. Instead of comprehension questions, the participants had to decide for each sentence whether it contains the relation or not, i.e. they were actively annotating each sentence. Figure FIGREF8 (right) shows an example screen for this task. 17% of the sentences did not include the relation type and were used as control conditions. All sentences within one block involved the same relation type. The blocks started with a practice round, which described the relation and was followed by three sample sentences, so that the participants would be familiar with the respective relation type. Corpus Construction ::: Linguistic assessment As a linguistic assessment, the vocabulary and language proficiency of the participants was tested with the LexTALE test (Lexical Test for Advanced Learners of English, lemhofer2012introducing). This is an unspeeded lexical decision task designed for intermediate to highly proficient language users. The average LexTALE score over all participants was 88.54%. Moreover, we also report the scores the participants achieved with their answers to the reading comprehension control questions and their relation annotations. The detailed scores for all participants are also presented in Table TABREF4. Corpus Construction ::: Data acquisition Data acquisition took place in a sound-attenuated and dark experiment room. Participants were seated at a distance of 68cm from a 24-inch monitor with a resolution of 800x600 pixels. A stable head position was ensured via a chin rest. Participants were instructed to stay as still as possible during the tasks to avoid motor EEG artifacts. Participants were also offered snacks and water during the breaks and were encouraged to rest. All sentences were presented at the same position on the screen and could span multiple lines. The sentences were presented in black on a light grey background with font size 20-point Arial, resulting in a letter height of 0.8 mm. The experiment was programmed in MATLAB 2016b BIBREF10, using PsychToolbox BIBREF11. Participants completed the tasks sitting alone in the room, while two research assistants were monitoring their progress in the adjoining room. All recording scripts including detailed participant instructions are available alongside the data. Corpus Construction ::: Data acquisition ::: Eye-tracking acquisition Eye position and pupil size were recorded with an infrared video-based eye tracker (EyeLink 1000 Plus, SR Research) at a sampling rate of 500 Hz. The eye tracker was calibrated with a 9-point grid at the beginning of the session and re-validated before each block of sentences. Corpus Construction ::: Data acquisition ::: EEG acquisition High-density EEG data were recorded at a sampling rate of 500 Hz with a bandpass of 0.1 to 100 Hz, using a 128-channel EEG Geodesic Hydrocel system (Electrical Geodesics). The recording reference was set at electrode Cz. The head circumference of each participant was measured to select an appropriately sized EEG net. To ensure good contact, the impedance of each electrode was checked prior to recording, and was kept below 40 kOhm. Electrode impedance levels were checked after every third block of 50 sentences (approx. 
every 30 mins) and reduced if necessary. Corpus Construction ::: Preprocessing and feature extraction ::: Eye-tracking The eye-tracking data consists of (x,y) gaze location entries for all individual fixations (Figure FIGREF1b). Coordinates were given in pixels with respect to the monitor coordinates (the upper left corner of the screen was (0,0) and down/right was positive). We provide this raw data as well as various engineered eye-tracking features. For this feature extraction only fixations within the boundaries of each displayed word were extracted. Data points distinctly not associated with reading (minimum distance of 50 pixels to the text) were excluded. Additionally, fixations shorter than 100 ms were excluded from the analyses, because these are unlikely to reflect fixations relevant for reading BIBREF12. On the basis of the GECO and ZuCo 1.0 corpora, we extracted the following features: (i) gaze duration (GD), the sum of all fixations on the current word in the first-pass reading before the eye moves out of the word; (ii) total reading time (TRT), the sum of all fixation durations on the current word, including regressions; (iii) first fixation duration (FFD), the duration of the first fixation on the prevailing word; (iv) single fixation duration (SFD), the duration of the first and only fixation on the current word; and (v) go-past time (GPT), the sum of all fixations prior to progressing to the right of the current word, including regressions to previous words that originated from the current word. For each of these eye-tracking features we additionally computed the pupil size. Furthermore, we extracted the number of fixations and mean pupil size for each word and sentence. Corpus Construction ::: Preprocessing and feature extraction ::: EEG The EEG data shared in this project are available as raw data, but also preprocessed with Automagic (version 1.4.6, pedroni2019automagic), a tool for automatic EEG data cleaning and validation. 105 EEG channels (i.e. electrodes) were used from the scalp recordings. 9 EOG channels were used for artifact removal and additional 14 channels lying mainly on the neck and face were discarded before data analysis. Bad channels were identified and interpolated. We used the Multiple Artifact Rejection Algorithm (MARA), a supervised machine learning algorithm that evaluates ICA components, for automatic artifact rejection. MARA has been trained on manual component classifications, and thus captures a wide range of artifacts. MARA is especially effective at detecting and removing eye and muscle artifact components. The effect of this preprocessing can be seen in Figure FIGREF1d. After preprocessing, we synchronized the EEG and eye-tracking data to enable EEG analyses time-locked to the onsets of fixations. To compute oscillatory power measures, we band-pass filtered the continuous EEG signals across an entire reading task for five different frequency bands resulting in a time-series for each frequency band. The independent frequency bands were determined as follows: theta$_1$ (4–6 Hz), theta$_2$ (6.5–8 Hz), alpha$_1$ (8.5–10 Hz), alpha$_2$ (10.5–13 Hz), beta$_1$ (13.5–18 Hz), beta$_2$ (18.5–30 Hz), gamma$_1$ (30.5–40 Hz), and gamma$_2$ (40–49.5 Hz). We then applied a Hilbert transformation to each of these time-series. 
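In code, this filter-then-Hilbert step looks roughly as follows. This is a minimal sketch in Python/SciPy rather than the MATLAB toolchain used for the recordings; the Butterworth filter and its order are assumptions, since the passage only specifies the band limits, the 500 Hz sampling rate and the Hilbert transformation itself.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Frequency band limits as listed above (Hz).
BANDS = {
    "theta1": (4.0, 6.0),   "theta2": (6.5, 8.0),
    "alpha1": (8.5, 10.0),  "alpha2": (10.5, 13.0),
    "beta1": (13.5, 18.0),  "beta2": (18.5, 30.0),
    "gamma1": (30.5, 40.0), "gamma2": (40.0, 49.5),
}

def band_power_envelopes(eeg, fs=500.0, order=4):
    """Band-pass filter each channel per band and return Hilbert amplitude
    envelopes, preserving the temporal information needed to average power
    inside fixation windows.  eeg has shape (n_channels, n_samples)."""
    envelopes, nyq = {}, fs / 2.0
    for name, (lo, hi) in BANDS.items():
        b, a = butter(order, [lo / nyq, hi / nyq], btype="band")  # assumed 4th-order Butterworth
        filtered = filtfilt(b, a, eeg, axis=-1)                   # zero-phase band-pass
        envelopes[name] = np.abs(hilbert(filtered, axis=-1))      # instantaneous amplitude
    return envelopes

def fixation_band_power(envelope, onset, offset):
    """Mean band power within one fixation, given its sample indices."""
    return envelope[:, onset:offset].mean(axis=-1)
```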
We specifically chose the Hilbert transformation to maintain the temporal information of the amplitude of the frequency bands, to enable the power of the different frequencies for time segments defined through the fixations from the eye-tracking recording. Thus, for each eye-tracking feature we computed the corresponding EEG feature in each frequency band. Furthermore, we extracted sentence-level EEG features by calculating the power in each frequency band, and additionally, the difference of the power spectra between frontal left and right homologue electrodes pairs. For each eye-tracking based EEG feature, all channels were subject to an artifact rejection criterion of $90\mu V$ to exclude trials with transient noise. Data Validation The aim of the technical validation of the data is to guarantee good recording quality and to replicate findings of previous studies investigating co-registration of EEG and eye movement data during natural reading tasks (e.g. dimigen2011coregistration). We also compare the results to ZuCo 1.0 BIBREF1, which allows a more direct comparison due to the analogous recording procedure. Data Validation ::: Eye-tracking We validated the recorded eye-tracking data by analyzing the fixations made by all subjects through their reading speed and omission rate on sentence level. The omission rate is defined as the percentage of words that is not fixated in a sentence. Figure FIGREF10 (middle) shows the mean reading speed over all subjects, measured in seconds per sentence and Figure FIGREF10 (right) shows the mean omission rates aggregated over all subjects for each task. Clearly, the participants made less fixations during the task-specific reading, which lead to faster reading speed. Moreover, we corroborated these sentence-level metrics by visualizing the skipping proportion on word level (Figure FIGREF13). The skipping proportion is the average rate of words being skipped (i.e. not being fixated) in a sentence. As expected, this also increases in the task-specific reading. Although the reading material is from the same source and of the same length range (see Figure FIGREF10 (left)), in the first task (NR) passive reading was recorded, while in the second task (TSR) the subjects had to annotate a specific relation type in each sentence. Thus, the task-specific annotation reading lead to shorter passes because the goal was merely to recognize a relation in the text, but not necessarily to process the every word in each sentence. This distinct reading behavior is shown in Figure FIGREF15, where fixations occur until the end of the sentence during normal reading, while during task-specific reading the fixations stop after the decisive words to detect a given relation type. Finally, we also analyzed the average reading times for each of the extracted eye-tracking features. The means and distributions for both tasks are shown in Figure FIGREF21. These results are in line with the recorded data in ZuCo 1.0, as well as with the features extracted in the GECO corpus BIBREF4. Data Validation ::: EEG As a first validation step, we extracted fixation-related potentials (FRPs), where the EEG signal during all fixations of one task are averaged. Figure FIGREF24 shows the time-series of the resulting FRPs for two electrodes (PO8 and Cz), as well as topographies of the voltage distributions across the scalp at selected points in time. 
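A sketch of how such fixation-related potentials can be computed from the synchronized recordings, by averaging EEG segments time-locked to fixation onsets. The epoch window of -100 ms to 600 ms and the pre-onset baseline correction are illustrative assumptions; the passage does not state them.

```python
import numpy as np

def fixation_related_potentials(eeg, fixation_onsets, fs=500.0,
                                tmin=-0.1, tmax=0.6):
    """Average EEG segments time-locked to fixation onsets.

    eeg             : (n_channels, n_samples) preprocessed EEG
    fixation_onsets : iterable of onset sample indices from the eye tracker
    Returns (frp, times) with frp of shape (n_channels, n_epoch_samples)."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in fixation_onsets:
        if onset - pre < 0 or onset + post > eeg.shape[1]:
            continue                          # skip fixations too close to the recording edges
        seg = eeg[:, onset - pre:onset + post]
        seg = seg - seg[:, :pre].mean(axis=1, keepdims=True)  # baseline-correct on the pre-onset window
        epochs.append(seg)
    frp = np.mean(epochs, axis=0)             # grand average over all fixations of one task
    times = np.arange(-pre, post) / fs
    return frp, times
```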
The five components (for which the scalp topographies are plotted) are highly similar in the time-course of the chosen electrodes to dimigen2011coregistration as well as to ZuCo 1.0. Moreover, these previous studies were able to show an effect of fixation duration on the resulting FRPs. To show this dependency we followed two approaches. First, for each reading task, all single-trial FRPs were ordered by fixation duration and a vertical sliding time-window was used to smooth the data BIBREF13. Figure FIGREF25 (bottom) shows the resulting plots. In line with this previous work, a first positivation can be identified at 100 ms post-fixation onset. A second positive peak is located dependent on the duration of the fixation, which can be explained by the time-jittered succeeding fixation. The second approach is based on henderson2013co in which single trial EEG segments are clustered by the duration of the current fixation. As shown in Figure FIGREF25 (top), we chose four clusters and averaged the data within each cluster to four distinct FRPs, depending on the fixation duration. Again, the same positivation peaks become apparent. Both findings are consistent with the previous work mentioned and with our findings from ZuCo 1.0. Conclusion We presented a new, freely available corpus of eye movement and electrical brain activity recordings during natural reading as well as during annotation. This is the first dataset that allows for the comparison between these two reading paradigms. We described the materials and experiment design in detail and conducted an extensive validation to ensure the quality of the recorded data. Since this corpus is tailored to cognitively-inspired NLP, the applications and re-use potentials of this data are extensive. The provided word-level and sentence-level eye-tracking and EEG features can be used to improve and evaluate NLP and machine learning methods, for instance, to evaluate linguistic phenomena in neural models via psycholinguistic data. In addition, because the sentences contains semantic relation labels and the annotations of all participants, it can also be widely used for relation extraction and classification. Finally, the two carefully constructed reading paradigms allow for the comparison between normal reading and reading during annotation, which can be relevant to improve the manual labelling process as well as the quality of the annotations for supervised machine learning. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Did they experiment with this new dataset? Only give me the answer and do not output any other words.
Answer:
[ "No" ]
276
4,566
single_doc_qa
bbd0fc335be54c5aa13ad1245e05d705
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction A hashtag is a keyphrase represented as a sequence of alphanumeric characters plus underscore, preceded by the # symbol. Hashtags play a central role in online communication by providing a tool to categorize the millions of posts generated daily on Twitter, Instagram, etc. They are useful in search, tracking content about a certain topic BIBREF0 , BIBREF1 , or discovering emerging trends BIBREF2 . Hashtags often carry very important information, such as emotion BIBREF3 , sentiment BIBREF4 , sarcasm BIBREF5 , and named entities BIBREF6 , BIBREF7 . However, inferring the semantics of hashtags is non-trivial since many hashtags contain multiple tokens joined together, which frequently leads to multiple potential interpretations (e.g., lion head vs. lionhead). Table TABREF3 shows several examples of single- and multi-token hashtags. While most hashtags represent a mix of standard tokens, named entities and event names are prevalent and pose challenges to both human and automatic comprehension, as these are more likely to be rare tokens. Hashtags also tend to be shorter to allow fast typing, to attract attention or to satisfy length limitations imposed by some social media platforms. Thus, they tend to contain a large number of abbreviations or non-standard spelling variations (e.g., #iloveu4eva) BIBREF8 , BIBREF9 , which hinders their understanding. The goal of our study is to build efficient methods for automatically splitting a hashtag into a meaningful word sequence. Our contributions are: Our new dataset includes segmentation for 12,594 unique hashtags and their associated tweets annotated in a multi-step process for higher quality than the previous dataset of 1,108 hashtags BIBREF10 . We frame the segmentation task as a pairwise ranking problem, given a set of candidate segmentations. We build several neural architectures using this problem formulation which use corpus-based, linguistic and thesaurus based features. We further propose a multi-task learning approach which jointly learns segment ranking and single- vs. multi-token hashtag classification. The latter leads to an error reduction of 24.6% over the current state-of-the-art. Finally, we demonstrate the utility of our method by using hashtag segmentation in the downstream task of sentiment analysis. Feeding the automatically segmented hashtags to a state-of-the-art sentiment analysis method on the SemEval 2017 benchmark dataset results in a 2.6% increase in the official metric for the task. Background and Preliminaries Current approaches for hashtag segmentation can be broadly divided into three categories: (a) gazeteer and rule based BIBREF11 , BIBREF12 , BIBREF13 , (b) word boundary detection BIBREF14 , BIBREF15 , and (c) ranking with language model and other features BIBREF16 , BIBREF10 , BIBREF0 , BIBREF17 , BIBREF18 . Hashtag segmentation approaches draw upon work on compound splitting for languages such as German or Finnish BIBREF19 and word segmentation BIBREF20 for languages with no spaces between words such as Chinese BIBREF21 , BIBREF22 . Similar to our work, BIBREF10 BansalBV15 extract an initial set of candidate segmentations using a sliding window, then rerank them using a linear regression model trained on lexical, bigram and other corpus-based features. 
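To make the notion of candidate segmentations concrete, here is a toy sketch that enumerates every possible split of a hashtag and keeps the ones whose tokens pass a dictionary check. It is only a stand-in for the sliding-window and language-model-based generators discussed here, and the vocabulary is hypothetical.

```python
from itertools import combinations

def all_segmentations(hashtag):
    """Yield every way of splitting `hashtag` into contiguous tokens.

    A string of n characters has 2**(n-1) such splits, which is why in
    practice only a k-best candidate list is kept for reranking."""
    chars = hashtag.lstrip("#").lower()
    n = len(chars)
    for r in range(n):                                  # r = number of split points
        for cuts in combinations(range(1, n), r):
            bounds = (0, *cuts, n)
            yield [chars[i:j] for i, j in zip(bounds, bounds[1:])]

def filter_by_vocab(candidates, vocab):
    """Keep candidates whose every token is a known word (toy reranking proxy)."""
    return [seg for seg in candidates if all(tok in vocab for tok in seg)]

# Example with a hypothetical vocabulary:
vocab = {"lion", "head", "lionhead"}
print(filter_by_vocab(all_segmentations("#lionhead"), vocab))
# -> [['lionhead'], ['lion', 'head']]
```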
The current state-of-the-art approach BIBREF14 , BIBREF15 uses maximum entropy and CRF models with a combination of language model and hand-crafted features to predict if each character in the hashtag is the beginning of a new word. Generating Candidate Segmentations. Microsoft Word Breaker BIBREF16 is, among the existing methods, a strong baseline for hashtag segmentation, as reported in BIBREF14 and BIBREF10 . It employs a beam search algorithm to extract INLINEFORM0 best segmentations as ranked by the n-gram language model probability: INLINEFORM1 where INLINEFORM0 is the word sequence of segmentation INLINEFORM1 and INLINEFORM2 is the window size. More sophisticated ranking strategies, such as Binomial and word length distribution based ranking, did not lead to a further improvement in performance BIBREF16 . The original Word Breaker was designed for segmenting URLs using language models trained on web data. In this paper, we reimplemented and tailored this approach to segmenting hashtags by using a language model specifically trained on Twitter data (implementation details in § SECREF26 ). The performance of this method itself is competitive with state-of-the-art methods (evaluation results in § SECREF46 ). Our proposed pairwise ranking method will effectively take the top INLINEFORM3 segmentations generated by this baseline as candidates for reranking. However, in prior work, the ranking scores of each segmentation were calculated independently, ignoring the relative order among the top INLINEFORM0 candidate segmentations. To address this limitation, we utilize a pairwise ranking strategy for the first time for this task and propose neural architectures to model this. Multi-task Pairwise Neural Ranking We propose a multi-task pairwise neural ranking approach to better incorporate and distinguish the relative order between the candidate segmentations of a given hashtag. Our model adapts to address single- and multi-token hashtags differently via a multi-task learning strategy without requiring additional annotations. In this section, we describe the task setup and three variants of pairwise neural ranking models (Figure FIGREF11 ). Segmentation as Pairwise Ranking The goal of hashtag segmentation is to divide a given hashtag INLINEFORM0 into a sequence of meaningful words INLINEFORM1 . For a hashtag of INLINEFORM2 characters, there are a total of INLINEFORM3 possible segmentations but only one, or occasionally two, of them ( INLINEFORM4 ) are considered correct (Table TABREF9 ). We transform this task into a pairwise ranking problem: given INLINEFORM0 candidate segmentations { INLINEFORM1 }, we rank them by comparing each with the rest in a pairwise manner. More specifically, we train a model to predict a real number INLINEFORM2 for any two candidate segmentations INLINEFORM3 and INLINEFORM4 of hashtag INLINEFORM5 , which indicates INLINEFORM6 is a better segmentation than INLINEFORM7 if positive, and vice versa. To quantify the quality of a segmentation in training, we define a gold scoring function INLINEFORM8 based on the similarities with the ground-truth segmentation INLINEFORM9 : INLINEFORM10 We use the Levenshtein distance (minimum number of single-character edits) in this paper, although it is possible to use other similarity measurements as alternatives. We use the top INLINEFORM0 segmentations generated by Microsoft Word Breaker (§ SECREF2 ) as initial candidates. 
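A sketch of the gold scoring function described above, where a candidate is scored by its similarity to the ground-truth segmentation using Levenshtein distance. The exact normalisation is not spelled out here, so a plain negated edit distance over the space-joined tokens is assumed, and the pairwise training target is taken to be the difference of the two gold scores.

```python
def levenshtein(a, b):
    """Minimum number of single-character edits turning string a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def gold_score(candidate_tokens, truth_tokens):
    """Higher is better; 0 means an exact match with the ground truth."""
    return -levenshtein(" ".join(candidate_tokens), " ".join(truth_tokens))

def pairwise_target(cand_a, cand_b, truth):
    """Regression target for the pair (a, b): positive if a is the better split."""
    return gold_score(cand_a, truth) - gold_score(cand_b, truth)
```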
Pairwise Neural Ranking Model For an input candidate segmentation pair INLINEFORM0 , we concatenate their feature vectors INLINEFORM1 and INLINEFORM2 , and feed them into a feedforward network which emits a comparison score INLINEFORM3 . The feature vector INLINEFORM4 or INLINEFORM5 consists of language model probabilities using Good-Turing BIBREF23 and modified Kneser-Ney smoothing BIBREF24 , BIBREF25 , lexical and linguistic features (more details in § SECREF23 ). For training, we use all the possible pairs INLINEFORM6 of the INLINEFORM7 candidates as the input and their gold scores INLINEFORM8 as the target. The training objective is to minimize the Mean Squared Error (MSE): DISPLAYFORM0 where INLINEFORM0 is the number of training examples. To aggregate the pairwise comparisons, we follow a greedy algorithm proposed by BIBREF26 cohen1998learning and used for preference ranking BIBREF27 . For each segmentation INLINEFORM0 in the candidate set INLINEFORM1 , we calculate a single score INLINEFORM2 , and find the segmentation INLINEFORM3 corresponding to the highest score. We repeat the same procedure after removing INLINEFORM4 from INLINEFORM5 , and continue until INLINEFORM6 reduces to an empty set. Figure FIGREF11 (a) shows the architecture of this model. Margin Ranking (MR) Loss As an alternative to the pairwise ranker (§ SECREF15 ), we propose a pairwise model which learns from candidate pairs INLINEFORM0 but ranks each individual candidate directly rather than relatively. We define a new scoring function INLINEFORM1 which assigns a higher score to the better candidate, i.e., INLINEFORM2 , if INLINEFORM3 is a better candidate than INLINEFORM4 and vice-versa. Instead of concatenating the features vectors INLINEFORM5 and INLINEFORM6 , we feed them separately into two identical feedforward networks with shared parameters. During testing, we use only one of the networks to rank the candidates based on the INLINEFORM7 scores. For training, we add a ranking layer on top of the networks to measure the violations in the ranking order and minimize the Margin Ranking Loss (MR): DISPLAYFORM0 where INLINEFORM0 is the number of training samples. The architecture of this model is presented in Figure FIGREF11 (b). Adaptive Multi-task Learning Both models in § SECREF15 and § SECREF17 treat all the hashtags uniformly. However, different features address different types of hashtags. By design, the linguistic features capture named entities and multi-word hashtags that exhibit word shape patterns, such as camel case. The ngram probabilities with Good-Turing smoothing gravitate towards multi-word segmentations with known words, as its estimate for unseen ngrams depends on the fraction of ngrams seen once which can be very low BIBREF28 . The modified Kneser-Ney smoothing is more likely to favor segmentations that contain rare words, and single-word segmentations in particular. Please refer to § SECREF46 for a more detailed quantitative and qualitative analysis. To leverage this intuition, we introduce a binary classification task to help the model differentiate single-word from multi-word hashtags. The binary classifier takes hashtag features INLINEFORM0 as the input and outputs INLINEFORM1 , which represents the probability of INLINEFORM2 being a multi-word hashtag. INLINEFORM3 is used as an adaptive gating value in our multi-task learning setup. The gold labels for this task are obtained at no extra cost by simply verifying whether the ground-truth segmentation has multiple words. 
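Returning to the aggregation step described earlier in this section, a minimal sketch of the greedy procedure: each remaining candidate receives a single score by summing its pairwise comparisons against the rest, the winner is appended to the ranking and removed, and the process repeats. Summing the raw comparison scores is an assumption about the aggregation used.

```python
def greedy_rank(candidates, compare):
    """Order candidates using a learned pairwise comparator.

    `compare(a, b)` returns a real number that is positive when `a` is
    judged the better segmentation of the pair."""
    remaining = list(candidates)
    ranking = []
    while remaining:
        # Aggregate each candidate's pairwise comparisons against the rest.
        scores = {
            i: sum(compare(s, t) for j, t in enumerate(remaining) if j != i)
            for i, s in enumerate(remaining)
        }
        best = max(scores, key=scores.get)
        ranking.append(remaining.pop(best))   # highest-scoring candidate ranks next
    return ranking
```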
We train the pairwise segmentation ranker and the binary single- vs. multi-token hashtag classifier jointly, by minimizing INLINEFORM4 for the pairwise ranker and the Binary Cross Entropy Error ( INLINEFORM5 ) for the classifier: DISPLAYFORM0 where INLINEFORM0 is the adaptive gating value, INLINEFORM1 indicates if INLINEFORM2 is actually a multi-word hashtag and INLINEFORM3 is the number of training examples. INLINEFORM4 and INLINEFORM5 are the weights for each loss. For our experiments, we apply equal weights. More specifically, we divide the segmentation feature vector INLINEFORM0 into two subsets: (a) INLINEFORM1 with modified Kneser-Ney smoothing features, and (b) INLINEFORM2 with Good-Turing smoothing and linguistic features. For an input candidate segmentation pair INLINEFORM3 , we construct two pairwise vectors INLINEFORM4 and INLINEFORM5 by concatenation, then combine them based on the adaptive gating value INLINEFORM6 before feeding them into the feedforward network INLINEFORM7 for pairwise ranking: DISPLAYFORM0 We use summation with padding, as we find this simple ensemble method achieves similar performance in our experiments as the more complex multi-column networks BIBREF29 . Figure FIGREF11 (c) shows the architecture of this model. An analogue multi-task formulation can also be used for the Margin Ranking loss as: DISPLAYFORM0 Features We use a combination of corpus-based and linguistic features to rank the segmentations. For a candidate segmentation INLINEFORM0 , its feature vector INLINEFORM1 includes the number of words in the candidate, the length of each word, the proportion of words in an English dictionary or Urban Dictionary BIBREF30 , ngram counts from Google Web 1TB corpus BIBREF31 , and ngram probabilities from trigram language models trained on the Gigaword corpus BIBREF32 and 1.1 billion English tweets from 2010, respectively. We train two language models on each corpus: one with Good-Turing smoothing using SRILM BIBREF33 and the other with modified Kneser-Ney smoothing using KenLM BIBREF34 . We also add boolean features, such as if the candidate is a named-entity present in the list of Wikipedia titles, and if the candidate segmentation INLINEFORM2 and its corresponding hashtag INLINEFORM3 satisfy certain word-shapes (more details in appendix SECREF61 ). Similarly, for hashtag INLINEFORM0 , we extract the feature vector INLINEFORM1 consisting of hashtag length, ngram count of the hashtag in Google 1TB corpus BIBREF31 , and boolean features indicating if the hashtag is in an English dictionary or Urban Dictionary, is a named-entity, is in camel case, ends with a number, and has all the letters as consonants. We also include features of the best-ranked candidate by the Word Breaker model. Implementation Details We use the PyTorch framework to implement our multi-task pairwise ranking model. The pairwise ranker consists of an input layer, three hidden layers with eight nodes in each layer and hyperbolic tangent ( INLINEFORM0 ) activation, and a single linear output node. The auxiliary classifier consists of an input layer, one hidden layer with eight nodes and one output node with sigmoid activation. We use the Adam algorithm BIBREF35 for optimization and apply a dropout of 0.5 to prevent overfitting. We set the learning rate to 0.01 and 0.05 for the pairwise ranker and auxiliary classifier respectively. For each experiment, we report results obtained after 100 epochs. 
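The exact gating equation sits behind the DISPLAYFORM placeholder above, so the sketch below is only one plausible reading of the adaptive multi-task model: the auxiliary classifier produces a multi-word probability w(h), the two zero-padded pairwise feature views are mixed by a w-weighted sum, and the ranking and classification losses are added with equal weights. Treat the combination rule and all names as assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskRanker(nn.Module):
    def __init__(self, dim_kn: int, dim_gt: int, dim_hashtag: int, hidden: int = 8):
        super().__init__()
        self.dim = max(dim_kn, dim_gt)               # both views are zero-padded to this width
        self.ranker = nn.Sequential(
            nn.Linear(2 * self.dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        self.aux = nn.Sequential(                    # single- vs. multi-word classifier
            nn.Linear(dim_hashtag, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def _pad(self, v):
        return F.pad(v, (0, self.dim - v.shape[-1]))

    def forward(self, kn_a, kn_b, gt_a, gt_b, hashtag_feats):
        w = self.aux(hashtag_feats)                           # P(multi-word), shape (batch, 1)
        v_kn = torch.cat([self._pad(kn_a), self._pad(kn_b)], dim=-1)
        v_gt = torch.cat([self._pad(gt_a), self._pad(gt_b)], dim=-1)
        score = self.ranker((1.0 - w) * v_kn + w * v_gt).squeeze(-1)
        return score, w.squeeze(-1)

# Joint training with equal weights on the two losses, as stated above:
#   score, w = model(kn_a, kn_b, gt_a, gt_b, hashtag_feats)
#   loss = F.mse_loss(score, gold_diff) + F.binary_cross_entropy(w, is_multiword)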
For the baseline model used to extract the INLINEFORM0 initial candidates, we reimplementated the Word Breaker BIBREF16 as described in § SECREF2 and adapted it to use a language model trained on 1.1 billion tweets with Good-Turing smoothing using SRILM BIBREF33 to give a better performance in segmenting hashtags (§ SECREF46 ). For all our experiments, we set INLINEFORM1 . Hashtag Segmentation Data We use two datasets for experiments (Table TABREF29 ): (a) STAN INLINEFORM0 , created by BIBREF10 BansalBV15, which consists of 1,108 unique English hashtags from 1,268 randomly selected tweets in the Stanford Sentiment Analysis Dataset BIBREF36 along with their crowdsourced segmentations and our additional corrections; and (b) STAN INLINEFORM1 , our new expert curated dataset, which includes all 12,594 unique English hashtags and their associated tweets from the same Stanford dataset. Experiments In this section, we present experimental results that compare our proposed method with the other state-of-the-art approaches on hashtag segmentation datasets. The next section will show experiments of applying hashtag segmentation to the popular task of sentiment analysis. Existing Methods We compare our pairwise neural ranker with the following baseline and state-of-the-art approaches: The original hashtag as a single token; A rule-based segmenter, which employs a set of word-shape rules with an English dictionary BIBREF13 ; A Viterbi model which uses word frequencies from a book corpus BIBREF0 ; The specially developed GATE Hashtag Tokenizer from the open source toolkit, which combines dictionaries and gazetteers in a Viterbi-like algorithm BIBREF11 ; A maximum entropy classifier (MaxEnt) trained on the STAN INLINEFORM0 training dataset. It predicts whether a space should be inserted at each position in the hashtag and is the current state-of-the-art BIBREF14 ; Our reimplementation of the Word Breaker algorithm which uses beam search and a Twitter ngram language model BIBREF16 ; A pairwise linear ranker which we implemented for comparison purposes with the same features as our neural model, but using perceptron as the underlying classifier BIBREF38 and minimizing the hinge loss between INLINEFORM0 and a scoring function similar to INLINEFORM1 . It is trained on the STAN INLINEFORM2 dataset. Evaluation Metrics We evaluate the performance by the top INLINEFORM0 ( INLINEFORM1 ) accuracy (A@1, A@2), average token-level F INLINEFORM2 score (F INLINEFORM3 @1), and mean reciprocal rank (MRR). In particular, the accuracy and MRR are calculated at the segmentation-level, which means that an output segmentation is considered correct if and only if it fully matches the human segmentation. Average token-level F INLINEFORM4 score accounts for partially correct segmentation in the multi-token hashtag cases. Results Tables TABREF32 and TABREF33 show the results on the STAN INLINEFORM0 and STAN INLINEFORM1 datasets, respectively. All of our pairwise neural rankers are trained on the 2,518 manually segmented hashtags in the training set of STAN INLINEFORM2 and perform favorably against other state-of-the-art approaches. Our best model (MSE+multitask) that utilizes different features adaptively via a multi-task learning procedure is shown to perform better than simply combining all the features together (MR and MSE). 
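For reference, the evaluation metrics described above can be computed as in the following illustrative sketch (our own simplification; in particular, the token-level F1 here ignores duplicate tokens). Since a hashtag may have one or two gold segmentations, golds is a list of sets.

def accuracy_at_k(ranked_lists, golds, k=1):
    hits = sum(any(c in gold_set for c in ranked[:k])
               for ranked, gold_set in zip(ranked_lists, golds))
    return hits / len(golds)

def mrr(ranked_lists, golds):
    total = 0.0
    for ranked, gold_set in zip(ranked_lists, golds):
        rank = next((i + 1 for i, c in enumerate(ranked) if c in gold_set), None)
        total += 1.0 / rank if rank else 0.0
    return total / len(golds)

def token_f1(prediction: str, gold: str) -> float:
    pred, ref = set(prediction.split()), set(gold.split())
    overlap = len(pred & ref)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(pred), overlap / len(ref)
    return 2 * p * r / (p + r)

ranked_lists = [["ice cream day", "icecream day"]]   # system output, best first
golds = [{"ice cream day"}]                          # one or two gold segmentations per hashtag
print(accuracy_at_k(ranked_lists, golds, k=1), mrr(ranked_lists, golds),
      token_f1(ranked_lists[0][0], "ice cream day"))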
We highlight the 24.6% error reduction on STAN INLINEFORM3 and 16.5% on STAN INLINEFORM4 of our approach over the previous SOTA BIBREF14 on the Multi-token hashtags, and the importance of having a separate evaluation of multi-word cases, as it is trivial to obtain 100% accuracy for Single-token hashtags. While our hashtag segmentation model achieves a very high accuracy@2, it remains a challenge to get the top-one prediction exactly correct, which is what matters in practice. Some hashtags are very difficult to interpret, e.g., #BTVSMB refers to the Social Media Breakfast (SMB) in Burlington, Vermont (BTV). The improved Word Breaker with our addition of a Twitter-specific language model is a very strong baseline, which echoes the findings of the original Word Breaker paper BIBREF16 that having a large in-domain language model is extremely helpful for word segmentation tasks. It is worth noting that the other state-of-the-art system BIBREF14 also utilized a 4-gram language model trained on 476 million tweets from 2009. Analysis and Discussion To empirically illustrate the effectiveness of different features on different types of hashtags, we show the results for models using individual feature sets in pairwise ranking models (MSE) in Table TABREF45 . Language models with modified Kneser-Ney smoothing perform best on single-token hashtags, while Good-Turing and linguistic features work best on multi-token hashtags, confirming our intuition about their usefulness in a multi-task learning approach. Table TABREF47 shows a qualitative analysis with the first column ( INLINEFORM0 INLINEFORM1 INLINEFORM2 ) indicating which features lead to correct or wrong segmentations, their count in our data, and illustrative examples with human segmentation. As expected, longer hashtags with more than three tokens pose greater challenges, and the segmentation-level accuracy of our best model (MSE+multitask) drops to 82.1%. For many error cases, our model predicts a close-to-correct segmentation, e.g., #youknowyouupttooearly, #iseelondoniseefrance, which is also reflected by the higher token-level F INLINEFORM0 scores across hashtags with different lengths (Figure FIGREF51 ). Since our approach relies heavily on building a Twitter language model, we experimented with its size and show the results in Figure FIGREF52 . Our approach can perform well even with access to a smaller number of tweets. The drop in F INLINEFORM0 score for our pairwise neural ranker is only 1.4% and 3.9% when using the language models trained on 10% and 1% of the total 1.1 billion tweets, respectively. Language use in Twitter changes with time BIBREF9 . Our pairwise ranker uses language models trained on tweets from the year 2010. We tested our approach on a set of 500 random English hashtags posted in tweets from the year 2019 and show the results in Table TABREF55 . With a segmentation-level accuracy of 94.6% and an average token-level F INLINEFORM0 score of 95.6%, our approach performs favorably on 2019 hashtags. Extrinsic Evaluation: Twitter Sentiment Analysis We attempt to demonstrate the effectiveness of our hashtag segmentation system by studying its impact on the task of sentiment analysis in Twitter BIBREF39 , BIBREF40 , BIBREF41 . We use our best model (MSE+multitask), under the name HashtagMaster, in the following experiments.
Experimental Setup We compare the performance of the BiLSTM+Lex BIBREF42 sentiment analysis model under three configurations: (a) tweets with hashtags removed, (b) tweets with hashtags as single tokens excluding the # symbol, and (c) tweets with hashtags segmented by our system, HashtagMaster. BiLSTM+Lex is a state-of-the-art open source system for predicting tweet-level sentiment BIBREF43 . It learns a context-sensitive sentiment intensity score by leveraging a Twitter-based sentiment lexicon BIBREF44 . We use the same settings as described by BIBREF42 teng-vo-zhang:2016:EMNLP2016 to train the model. We use the dataset from the Sentiment Analysis in Twitter shared task (subtask A) at SemEval 2017 BIBREF41 . Given a tweet, the goal is to predict whether it expresses POSITIVE, NEGATIVE or NEUTRAL sentiment. The training and development sets consist of 49,669 tweets, and we use 40,000 for training and the rest for development. There are a total of 12,284 tweets containing 12,128 hashtags in the SemEval 2017 test set, and our hashtag segmenter ended up splitting 6,975 of those hashtags, present in 3,384 tweets. Results and Analysis In Table TABREF59 , we report the results based on the 3,384 tweets where HashtagMaster predicted a split, since for the remaining tweets in the test set the hashtag segmenter would neither improve nor worsen the sentiment prediction. Our hashtag segmenter successfully improved the sentiment analysis performance by 2% on average recall and F INLINEFORM0 compared to having hashtags unsegmented. This improvement is seemingly small but decidedly important for tweets where sentiment-related information is embedded in multi-word hashtags and sentiment prediction would be incorrect based only on the text (see Table TABREF60 for examples). In fact, 2,605 out of the 3,384 tweets have multi-word hashtags that contain words in the Twitter-based sentiment lexicon BIBREF44 , and 125 tweets contain sentiment words only in the hashtags but not in the rest of the tweet. On the entire test set of 12,284 tweets, the increase in the average recall is 0.5%. Other Related Work Automatic hashtag segmentation can improve the performance of many applications besides sentiment analysis, such as text classification BIBREF13 , named entity linking BIBREF10 and modeling user interests for recommendations BIBREF45 . It can also help in collecting data of higher volume and quality by providing a more nuanced interpretation of its content, as shown for emotion analysis BIBREF46 , sarcasm and irony detection BIBREF11 , BIBREF47 . Better semantic analysis of hashtags can also potentially be applied to hashtag annotation BIBREF48 , to improve distant supervision labels in training classifiers for tasks such as sarcasm BIBREF5 , sentiment BIBREF4 , emotions BIBREF3 ; and, more generally, as labels for pre-training representations of words BIBREF49 , sentences BIBREF50 , and images BIBREF51 . Conclusion We proposed a new pairwise neural ranking model for hashtag segmentation and showed significant performance improvements over the state-of-the-art. We also constructed a larger and more curated dataset for analyzing and benchmarking hashtag segmentation methods. We demonstrated that hashtag segmentation helps with downstream tasks such as sentiment analysis. Although we focused on English hashtags, our pairwise ranking approach is language-independent and we intend to extend our toolkit to languages other than English as future work.
Acknowledgments We thank Ohio Supercomputer Center BIBREF52 for computing resources and the NVIDIA for providing GPU hardware. We thank Alan Ritter, Quanze Chen, Wang Ling, Pravar Mahajan, and Dushyanta Dhyani for valuable discussions. We also thank the annotators: Sarah Flanagan, Kaushik Mani, and Aswathnarayan Radhakrishnan. This material is based in part on research sponsored by the NSF under grants IIS-1822754 and IIS-1755898, DARPA through the ARO under agreement number W911NF-17-C-0095, through a Figure-Eight (CrowdFlower) AI for Everyone Award and a Criteo Faculty Research Award to Wei Xu. The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of the U.S. Government. Word-shape rules Our model uses the following word shape rules as boolean features. If the candidate segmentation INLINEFORM0 and its corresponding hashtag INLINEFORM1 satisfies a word shape rule, then the boolean feature is set to True. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How is the dataset of hashtags sourced? Only give me the answer and do not output any other words.
Answer:
[ "1,268 randomly selected tweets in the Stanford Sentiment Analysis Dataset BIBREF36, all 12,594 unique English hashtags and their associated tweets from the same Stanford dataset", "Stanford Sentiment Analysis Dataset BIBREF36" ]
276
5,209
single_doc_qa
144721a255a24ccf966738eaeba6810b
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: \section{Model equations} \label{sec:equations} In drift-fluid models the continuity equation \begin{align} \frac{\partial n}{\partial t} + \nabla\cdot\left( n \vec u_E \right) &= 0 \label{eq:generala} \end{align} describes the dynamics of the electron density $n$. Here $\vec u_E := (\hat{\vec b} \times \nabla \phi)/B$ gives the electric drift velocity in a magnetic field $\vec B := B \hat{\vec b}$ and an electric potential $\phi$. We neglect contributions of the diamagnetic drift~\cite{Kube2016}. Equation~\eqref{eq:generala} is closed by invoking quasineutrality, i.e. the divergence of the ion polarization, the electron diamagnetic and the gravitational drift currents must vanish \begin{align} \nabla\cdot\left( \frac{n}{\Omega} \left( \frac{\partial}{\partial t} + \vec u_E \cdot\nabla \right)\frac{\nabla_\perp \phi}{B} + n\vec u_d - n\vec u_g\right) &=0 . \label{eq:generalb} \end{align} Here we denote $\nabla_\perp\phi/B := - \hat{\vec b} \times \vec u_E$, the electron diamagnetic drift $\vec u_d := - T_e(\hat{\vec b} \times\nabla n ) /enB$ with the electron temperature $T_e$, the ion gravitational drift velocity $\vec u_g := m_i \hat{\vec b} \times \vec g /B$ with ion mass $m_i$, and the ion gyro-frequency $\Omega := eB/m_i$. Combining Eq.~\eqref{eq:generalb} with Eq.~\eqref{eq:generala} yields \begin{align} \frac{\partial \rho}{\partial t} + \nabla\cdot\left( \rho\vec u_E \right) + \nabla \cdot\left( n(\vec u_\psi + \vec u_d + \vec u_g) \right) &= 0\label{eq:vorticity} \end{align} with the polarization charge density $\rho = \nabla\cdot( n\nabla_\perp \phi / \Omega B)$ and $\vec u_\psi := \hat{\vec b}\times \nabla\psi /B$ with $\psi:= m_i\vec u_E^2 /2e$. We exploit this form of Eq.~\eqref{eq:generalb} in our numerical simulations. Equations~\eqref{eq:generala} and \eqref{eq:generalb} respectively \eqref{eq:vorticity} have several invariants. First, in Eq.~\eqref{eq:generala} the relative particle number $M(t) := \int \mathrm{dA}\, (n-n_0)$ is conserved over time $\d M(t)/\d t = 0$. Furthermore, we integrate $( T_e(1+\ln n) -T_e \ln B)\partial_t n$ as well as $-e\phi \partial_t\rho - (m_i\vec u_E^2/2+gm_ix - T_e\ln B)\partial_t n$ over the domain to get, disregarding boundary contributions, \begin{align} \frac{\d}{\d t}\left[T_eS(t) + H(t) \right] = 0, \label{eq:energya}\\ \frac{\d}{\d t} \left[ E(t) - G(t) - H(t)\right] = 0, \label{eq:energyb} \end{align} where we define the entropy $S(t):=\int \mathrm{dA}\, [n\ln(n/n_0) - (n-n_0)]$, the kinetic energy $E(t):=m_i \int \mathrm{dA}\, n\vec u_E^2/2$ and the potential energies $G(t) := m_i g\int \mathrm{dA}\, x(n-n_0)$ and $H(t) := T_e\int \mathrm{dA}\, (n-n_0) \ln (B^{-1})$. Note that $n\ln( n/n_0) - n + n_0 \approx (n-n_0)^2/2$ for $|(n-n_0)/n_0| \ll 1$ and $S(t)$ thus reduces to the local entropy form in Reference~\cite{Kube2016}. We now set up a gravitational field $\vec g = g\hat x$ and a constant homogeneous background magnetic field $\vec B = B_0 \hat z$ in a Cartesian coordinate system. Then the divergences of the electric and gravitational drift velocities $\nabla\cdot\vec u_E$ and $\nabla\cdot\vec u_g$ and the diamagnetic current $\nabla\cdot(n\vec u_d)$ vanish, which makes the flow incompressible. Furthermore, the magnetic potential energy vanishes $H(t) = 0$. 
In a second system we model the inhomogeneous magnetic field present in tokamaks as $\vec B := B_0 (1+ x/R_0)^{-1}\hat z$ and neglect the gravitational drift $\vec u_g = 0$. Then, the potential energy $G(t) = 0$. Note that $H(t) = m_i \ensuremath{C_\mathrm{s}}^2/R_0\int\mathrm{dA}\, x(n-n_0) +\mathcal O(R_0^{-2}) $ reduces to $G(t)$ with the effective gravity $g_\text{eff}:= \ensuremath{C_\mathrm{s}}^2/R_0$ with $\ensuremath{C_\mathrm{s}}^2 := T_e/m_i$. For the rest of this letter we treat $g$ and $g_\text{eff}$ as well as $G(t)$ and $H(t)$ on the same footing. The magnetic field inhomogeneity thus entails compressible flows, which is the only difference to the model describing dynamics in a homogeneous magnetic field introduced above. Since both $S(t)\geq 0$ and $E(t)\geq 0$ we further derive from Eq.~\eqref{eq:energya} and Eq.~\eqref{eq:energyb} that the kinetic energy is bounded by $E(t) \leq T_eS(t) + E(t) = T_e S(0)$; a feature absent from the gravitational system with incompressible flows, where $S(t) = S(0)$. We now show that the invariants Eqs.~\eqref{eq:energya} and \eqref{eq:energyb} present restrictions on the velocity and acceleration of plasma blobs. First, we define the blobs' center of mass (COM) via $X(t):= \int\mathrm{dA}\, x(n-n_0)/M$ and its COM velocity as $V(t):=\d X(t)/\d t$. The latter is proportional to the total radial particle flux~\cite{Garcia_Bian_Fundamensky_POP_2006, Held2016a}. We assume that $n>n_0$ and $(n-n_0)^2/2 \leq [ n\ln (n/n_0) - (n-n_0)]n $ to show for both systems \begin{align} (MV)^2 &= \left( \int \mathrm{dA}\, n{\phi_y}/{B} \right)^2 = \left( \int \mathrm{dA}\, (n-n_0){\phi_y}/{B} \right)^2\nonumber\\ &\leq 2 \left( \int \mathrm{dA}\, \left[n\ln (n/n_0) -(n-n_0)\right]^{1/2}\sqrt{n}{\phi_y}/{B}\right)^2\nonumber\\ &\leq 4 S(0) E(t)/m_i \label{eq:inequality} \end{align} Here we use the Cauchy-Schwartz inequality and $\phi_y:=\partial\phi/\partial y$. Note that although we derive the inequality Eq.~\eqref{eq:inequality} only for amplitudes $\triangle n >0$ we assume that the results also hold for depletions. This is justified by our numerical results later in this letter. If we initialize our density field with a seeded blob of radius $\ell$ and amplitude $\triangle n$ as \begin{align} n(\vec x, 0) &= n_0 + \triangle n \exp\left( -\frac{\vec x^2}{2\ell^2} \right), \label{eq:inita} \end{align} and $\phi(\vec x, 0 ) = 0$, we immediately have $M := M(0) = 2\pi \ell^2 \triangle n$, $E(0) = G(0) = 0$ and $S(0) = 2\pi \ell^2 f(\triangle n)$, where $f(\triangle n)$ captures the amplitude dependence of the integral for $S(0)$. The acceleration for both incompressible and compressible flows can be estimated by assuming a linear acceleration $V=A_0t$ and $X=A_0t^2/2$~\cite{Held2016a} and using $E(t) = G(t) = m_igMX(t)$ in Eq.~\eqref{eq:inequality} \begin{align} \frac{A_0}{g} = \mathcal Q\frac{2S(0)}{M} \approx \frac{\mathcal Q}{2} \frac{\triangle n }{n_0+2\triangle n/9}. \label{eq:acceleration} \end{align} Here, we use the Pad\'e approximation of order $(1/1)$ of $2S(0)/M $ and define a model parameter $\mathcal Q$ with $0<\mathcal Q\leq1$ to be determined by numerical simulations. Note that the Pad\'e approximation is a better approximation than a simple truncated Taylor expansion especially for large relative amplitudes of order unity. 
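As a quick numerical check of Eq.~\eqref{eq:acceleration}, the sketch below (our own illustration, not part of the original derivation) compares the exact value of $2S(0)/M$ for the Gaussian blob of Eq.~\eqref{eq:inita} with its (1/1) Padé approximation; multiplying by the fit parameter $\mathcal Q \approx 0.32$ quoted later in the text gives the predicted initial acceleration $A_0/g$.

import numpy as np

def two_S0_over_M(a, n_u=200001, u_max=60.0):
    """Exact 2*S(0)/M for n = n0*(1 + a*exp(-u)), u = r^2/(2*l^2), a = dn/n0.
    The common prefactor 2*pi*l^2*n0 cancels between S(0) and M."""
    u, du = np.linspace(0.0, u_max, n_u, retstep=True)
    e = a * np.exp(-u)
    integrand = (1.0 + e) * np.log1p(e) - e
    return 2.0 * float(np.sum(integrand) * du) / a

def pade(a):
    """(1/1) Pade approximation used in Eq. (eq:acceleration)."""
    return 0.5 * a / (1.0 + 2.0 * a / 9.0)

Q = 0.32  # fit parameter reported in the simulations below
for a in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"dn/n0={a:4.1f}  A0/g exact={Q * two_S0_over_M(a):.3f}  Pade={Q * pade(a):.3f}")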
Eq.~\eqref{eq:acceleration} predicts that $A_0/g\sim \triangle n/n_0$ for small amplitudes $|\triangle n/n_0| < 1$ and $A_0 \sim g $ for very large amplitudes $\triangle n /n_0 \gg 1$, which confirms the predictions in~\cite{Pecseli2016} and reproduces the limits discussed in~\cite{Angus2014}. As pointed out earlier for compressible flows Eq.~\eqref{eq:inequality} can be further estimated \begin{align} (MV)^2 \leq 4 T_eS(0)^2/m_i. \label{} \end{align} We therefore have a restriction on the maximum COM velocity for compressible flows, which is absent for incompressible flows \begin{align} \frac{\max |V|}{\ensuremath{C_\mathrm{s}}} = {\mathcal Q}\frac{2S(0)}{M} \approx \frac{\mathcal Q}{2} \frac{|\triangle n| }{n_0+2/9 \triangle n } \approx \frac{\mathcal Q}{2} \frac{|\triangle n|}{n_0}. \label{eq:linear} \end{align} For $|\triangle n /n_0|< 1$ Eq.~\eqref{eq:linear} reduces to the linear scaling derived in~\cite{Kube2016}. Finally, a scale analysis of Eq.~\eqref{eq:vorticity} shows that~\cite{Ott1978, Garcia2005, Held2016a} \begin{align} \frac{\max |V|}{\ensuremath{C_\mathrm{s}}} = \mathcal R \left( \frac{\ell}{R_0}\frac{|\triangle n|}{n_0} \right)^{1/2}. \label{eq:sqrt} \end{align} This equation predicts a square root dependence of the center of mass velocity on amplitude and size. We now propose a simple phenomenological model that captures the essential dynamics of blobs and depletions in the previously stated systems. More specifically the model reproduces the acceleration Eq.~\eqref{eq:acceleration} with and without Boussinesq approximation, the square root scaling for the COM velocity Eq.~\eqref{eq:sqrt} for incompressible flows as well as the relation between the square root scaling Eq.~\eqref{eq:sqrt} and the linear scaling Eq.~\eqref{eq:linear} for compressible flows. The basic idea is that the COM of blobs behaves like the one of an infinitely long plasma column immersed in an ambient plasma. The dynamics of this column reduces to the one of a two-dimensional ball. This idea is similar to the analytical ``top hat'' density solution for blob dynamics recently studied in~\cite{Pecseli2016}. The ball is subject to buoyancy as well as linear and nonlinear friction \begin{align} M_{\text{i}} \frac{d V}{d t} = (M_{\text{g}} - M_\text{p}) g - c_1 V - \mathrm{sgn}(V ) \frac{1}{2}c_2 V^2. \label{eq:ball} \end{align} The gravity $g$ has a positive sign in the coordinate system; sgn$(f)$ is the sign function. The first term on the right hand side is the buoyancy, where $M_{\text{g}} := \pi \ell^2 (n_0 + \mathcal Q \triangle n/2)$ is the gravitational mass of the ball with radius $\ell$ and $M_\mathrm{p} := n_0 \pi \ell^2 $ is the mass of the displaced ambient plasma. Note that if $\triangle n<0$ the ball represents a depletion and the buoyancy term has a negative sign, i.e. the depletion will rise. We introduce an inertial mass $M_{\text{i}} := \pi\ell^2 (n_0 +2\triangle n/9)$ different from the gravitational mass $M_{\text{g}}$ in order to recover the initial acceleration in Eq.~\eqref{eq:acceleration}. We interpret the parameters $\mathcal Q$ and $2/9$ as geometrical factors that capture the difference of the actual blob form from the idealized ``top hat'' solution. Also note that the Boussinesq approximation appears in the model as a neglect of inertia, $M_{\text{i}} = \pi\ell^2n_0$. The second term is the linear friction term with coefficient $c_1(\ell)$, which depends on the size of the ball. 
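The buoyant-ball model of Eq.~\eqref{eq:ball} is straightforward to integrate numerically. The sketch below (ours) uses the friction coefficients $c_1$ and $c_2$ that are fixed in the following paragraphs and returns the maximum COM velocity; with $n_0 = g = C_\mathrm{s} = \ell = 1$ the combination $g\ell/C_\mathrm{s}^2$ plays the role of $\ell/R_0$.

import numpy as np

def max_com_velocity(dn, n0=1.0, ell=1.0, g=1.0, Cs=1.0, Q=0.32, R=0.85,
                     dt=1e-3, t_end=50.0):
    area = np.pi * ell ** 2
    M_i = area * (n0 + 2.0 * dn / 9.0)        # inertial mass
    M_g = area * (n0 + Q * dn / 2.0)          # gravitational mass
    M_p = area * n0                            # mass of displaced ambient plasma
    c1 = area * n0 * g / Cs                    # linear friction (fixed below in the text)
    c2 = Q * np.pi * n0 * ell / R ** 2         # nonlinear friction (fixed below in the text)
    V, V_max = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        force = (M_g - M_p) * g - c1 * V - 0.5 * np.sign(V) * c2 * V ** 2
        V += dt * force / M_i                  # explicit Euler step
        V_max = max(V_max, abs(V))
    return V_max

# Blobs rise toward the terminal velocity of Eq. (eq:vmax_theo); depletions (dn < 0) fall.
for dn in (0.1, 1.0, 5.0, -0.5):
    print(dn, max_com_velocity(dn))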
If we disregard the nonlinear friction, $c_2=0$, Eq.~\eqref{eq:ball} directly yields a maximum velocity $c_1V^*=\pi \ell^2 n g \mathcal Q\triangle n/2$. From our previous considerations $\max V/\ensuremath{C_\mathrm{s}}=\mathcal Q \triangle n /2n_0$, we thus identify \begin{align} c_1 = \pi\ell^2 n_0 g/\ensuremath{C_\mathrm{s}}. \label{} \end{align} The linear friction coefficient thus depends on the gravity and the size of the ball. The last term in \eqref{eq:ball} is the nonlinear friction. The sign of the force depends on whether the ball rises or falls in the ambient plasma. If we disregard linear friction $c_1=0$, we have the maximum velocity $V^*= \sigma(\triangle n)\sqrt{\pi \ell^2|\triangle n| g\mathcal Q/c_2}$, which must equal $\max V= \sigma(\triangle n) \mathcal R \sqrt{g \ell |\triangle n/n_0|}$ and thus \begin{align} c_2 = {\mathcal Q\pi n_0\ell }/{\mathcal R^2}. \label{} \end{align} Inserting $c_1$ and $c_2$ into Eq.~\eqref{eq:ball} we can derive the maximum absolute velocity in the form \begin{align} \frac{\max |V|}{\ensuremath{C_\mathrm{s}}} = \left(\frac{\mathcal R^2}{\mathcal Q}\right) \frac{\ell}{R_0} \left( \left({1+\left( \frac{\mathcal Q}{\mathcal R} \right)^{2} \frac{|\triangle n|/n_0 }{\ell/R_0}}\right)^{1/2}-1 \right) \label{eq:vmax_theo} \end{align} and thus have a concise expression for $\max |V|$ that captures both the linear scaling \eqref{eq:linear} as well as the square root scaling \eqref{eq:sqrt}. With Eq.~\eqref{eq:acceleration} and Eq.~\eqref{eq:sqrt} respectively Eq.~\eqref{eq:vmax_theo} we finally arrive at an analytical expression for the time at which the maximum velocity is reached via $t_{\max V} \sim \max V/A_0$. Its inverse $\gamma:=t_{\max V}^{-1}$ gives the global interchange growth rate, for which an empirical expression was presented in Reference~\cite{Held2016a}. We use the open source library FELTOR to simulate Eqs.~\eqref{eq:generala} and \eqref{eq:vorticity} with and without drift compression. For numerical stabilty we added small diffusive terms on the right hand sides of the equations. The discontinuous Galerkin methods employ three polynomial coefficients and a minimum of $N_x=N_y=768$ grid cells. The box size is $50\ell$ in order to mitigate influences of the finite box size on the blob dynamics. Moreover, we used the invariants in Eqs. \eqref{eq:energya} and \eqref{eq:energyb} as consistency tests to verify the code and repeated simulations also in a gyrofluid model. No differences to the results presented here were found. Initial perturbations on the particle density field are given by Eq.~\eqref{eq:inita}, where the perturbation amplitude $\triangle n/n_0$ was chosen between $10^{-3}$ and $20$ for blobs and $-10^0$ and $ -10^{-3}$ for depletions. Due to computational reasons we show results only for $\triangle n/n_0\leq 20$. For compressible flows we consider two different cases $\ell/R_0 = 10^{-2}$ and $\ell /R_0 = 10^{-3}$. For incompressible flows Eq.~\eqref{eq:generala} and \eqref{eq:vorticity} can be normalized such that the blob radius is absent from the equations~\cite{Ott1978, Kube2012}. The simulations of incompressible flows can thus be used for both sizes. The numerical code as well as input parameters and output data can be found in the supplemental dataset to this contribution~\cite{Data2017}. \begin{figure}[htb] \includegraphics[width=\columnwidth]{com_blobs} \caption{ The maximum radial COM velocities of blobs for compressible and incompressible flows are shown. 
The continuous lines show Eq.~\eqref{eq:vmax_theo} while the dashed line shows the square root scaling Eq.~\eqref{eq:sqrt} with $\mathcal Q = 0.32$ and $\mathcal R=0.85$. } \label{fig:com_blobs} \end{figure} In Fig.~\ref{fig:com_blobs} we plot the maximum COM velocity for blobs with and without drift compression. For incompressible flows blobs follow the square root scaling almost perfectly. Only at very large amplitudes velocities are slightly below the predicted values. For small amplitudes we observe that the compressible blobs follow a linear scaling. When the amplitudes increase there is a transition to the square root scaling at around $\triangle n/n_0 \simeq 0.5$ for $\ell/R_0=10^{-2}$ and $\triangle n/n_0 \simeq 0.05$ for $\ell/R_0=10^{-3}$, which is consistent with Eq.~\eqref{eq:vmax_theo} and Reference~\cite{Kube2016}. In the transition regions the simulated velocities are slightly larger than the predicted ones from Eq.~\eqref{eq:vmax_theo}. Beyond these amplitudes the velocities of compressible and incompressible blobs align. \begin{figure}[htb] \includegraphics[width=\columnwidth]{com_holes} \caption{ The maximum radial COM velocities of depletions for compressible and incompressible flows are shown. The continuous lines show Eq.~\eqref{eq:vmax_theo} while the dashed line shows the square root scaling Eq.~\eqref{eq:sqrt} with $\mathcal Q = 0.32$ and $\mathcal R=0.85$. Note that small amplitudes are on the right and amplitudes close to unity are on the left side. } \label{fig:com_depletions} \end{figure} In Fig.~\ref{fig:com_depletions} we show the maximum radial COM velocity for depletions instead of blobs. For relative amplitudes below $|\triangle n|/n_0 \simeq 0.5$ (right of unity in the plot) the velocities coincide with the corresponding blob velocities in Fig.~\ref{fig:com_blobs}. For amplitudes larger than $|\triangle n|/n_0\simeq 0.5$ the velocities follow the square root scaling. We observe that for plasma depletions beyond $90$ percent the velocities in both systems reach a constant value that is very well predicted by the square root scaling. \begin{figure}[htb] \includegraphics[width=\columnwidth]{acc_blobs} \caption{ Average acceleration of blobs for compressible and incompressible flows are shown. The continuous line shows the acceleration in Eq.~\eqref{eq:acceleration} with $\mathcal Q=0.32$ while the dashed line is a linear reference line, which corresponds to the Boussinesq approximation. } \label{fig:acc_blobs} \end{figure} In Fig.~\ref{fig:acc_blobs} we show the average acceleration of blobs for compressible and incompressible flows computed by dividing the maximum velocity $\max V$ by the time to reach this velocity $t_{\max V}$. We compare the simulation results to the theoretical predictions Eq.~\eqref{eq:acceleration} of our model with and without inertia. The results of the compressible and incompressible systems coincide and fit very well to our theoretical values. For amplitudes larger than unity the acceleration deviates significantly from the prediction with Boussinesq approximation. \begin{figure}[htb] \includegraphics[width=\columnwidth]{acc_holes} \caption{ Average acceleration of depletions for compressible and incompressible flows are shown. The continuous line shows the acceleration in Eq.~\eqref{eq:acceleration} with $\mathcal Q=0.32$ while the dashed line is a linear reference line, which corresponds to the Boussinesq approximation. 
} \label{fig:acc_depletions} \end{figure} In Fig.~\ref{fig:acc_depletions} we show the simulated acceleration of depletions in the compressible and the incompressible systems. We compare the simulation results to the theoretical predictions Eq.~\eqref{eq:acceleration} of our model with and without inertia. Deviations from our theoretical prediction Eq.~\eqref{eq:acceleration} are visible for amplitudes smaller than $\triangle n/n_0 \simeq -0.5$ (left of unity in the plot). The relative deviations are small at around $20$ percent. As in Fig.~\ref{fig:com_depletions} the acceleration reaches a constant values for plasma depletions of more than $90$ percent. Comparing Fig.~\ref{fig:acc_depletions} to Fig.~\ref{fig:acc_blobs} the asymmetry between blobs and depletions becomes apparent. While the acceleration of blobs is reduced for large amplitudes compared to a linear dependence the acceleration of depletions is increased. In the language of our simple buoyancy model the inertia of depletions is reduced but increased for blobs. In conclusion we discuss the dynamics of seeded blobs and depletions in a compressible and an incompressible system. With only two fit parameters our theoretical results reproduce the numerical COM velocities and accelerations over five orders of magnitude. We derive the amplitude dependence of the acceleration of blobs and depletions from the conservation laws of our systems in Eq.~\eqref{eq:acceleration}. From the same inequality a linear regime is derived in the compressible system for ratios of amplitudes to sizes smaller than a critical value. In this regime the blob and depletion velocity depends linearly on the initial amplitude and is independent of size. The regime is absent from the system with incompressible flows. Our theoretical results are verified by numerical simulations for all amplitudes that are relevant in magnetic fusion devices. Finally, we suggest a new empirical blob model that captures the detailed dynamics of more complicated models. The Boussinesq approximation is clarified as the absence of inertia and a thus altered acceleration of blobs and depletions. The maximum blob velocity is not altered by the Boussinesq approximation. The authors were supported with financial subvention from the Research Council of Norway under grant 240510/F20. M.W. and M.H. were supported by the Austrian Science Fund (FWF) Y398. The computational results presented have been achieved in part using the Vienna Scientific Cluster (VSC). Part of this work was performed on the Abel Cluster, owned by the University of Oslo and the Norwegian metacenter for High Performance Computing (NOTUR), and operated by the Department for Research Computing at USIT, the University of Oslo IT-department. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What is the relationship between the maximum velocity and the amplitude of the blob or depletion? Only give me the answer and do not output any other words.
Answer:
[ "The maximum velocity scales with the square root of the amplitude." ]
276
6,240
single_doc_qa
185a96cbe4d74b8586e035a795830f8e
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction We possess a wealth of prior knowledge about many natural language processing tasks. For example, in text categorization, we know that words such as NBA, player, and basketball are strong indicators of the sports category BIBREF0 , and words like terrible, boring, and messing indicate a negative polarity while words like perfect, exciting, and moving suggest a positive polarity in sentiment classification. A key problem arising here is how to leverage such knowledge to guide the learning process, an interesting problem for both the NLP and machine learning communities. Previous studies addressing the problem fall into several lines: first, leveraging prior knowledge to label data BIBREF1 , BIBREF2 ; second, encoding prior knowledge as a prior on parameters, as commonly seen in many Bayesian approaches BIBREF3 , BIBREF4 ; third, formalising prior knowledge with additional variables and dependencies BIBREF5 ; and last, using prior knowledge to control the distributions over latent output variables BIBREF6 , BIBREF7 , BIBREF8 , which makes the output variables easily interpretable. However, a crucial problem, which has rarely been addressed, is the bias in the prior knowledge that we supply to the learning model. Would the model be robust or sensitive to the prior knowledge? And which kind of knowledge is appropriate for the task? Consider an example: for a baseball-hockey classification task, we may be baseball fans unfamiliar with hockey, and can therefore provide a fair number of feature words for baseball but far fewer for hockey. Such prior knowledge may mislead the model with a heavy bias towards baseball. If the model cannot handle this situation appropriately, its performance may be undesirable. In this paper, we investigate this problem in the framework of Generalized Expectation Criteria BIBREF7 . The study aims to reveal the factors that reduce the sensitivity to the prior knowledge and therefore make the model more robust and practical. To this end, we introduce auxiliary regularization terms in which our prior knowledge is formalized as distributions over output variables. Recall the example just mentioned: though we do not have enough knowledge to provide features for the class hockey, it is easy for us to provide some neutral words, namely words that are not strong indicators of any class, such as player here. As one of the factors revealed in this paper, supplying neutral feature words can boost the performance remarkably, making the model more robust. More attractively, our proposed approach does not need manual annotation to label these neutral feature words. More specifically, we explore three regularization terms to address the problem: (1) a regularization term associated with neutral features; (2) a maximum entropy regularization term on the predicted class distribution; and (3) the KL divergence between the reference and predicted class distributions. For the first term, we simply use the most common features as neutral features and assume the neutral features are distributed uniformly over class labels. For the second and third terms, we assume we have some knowledge about the class distribution, which will be detailed later.
To summarize, the main contributions of this work are as follows: The rest of the paper is structured as follows: In Section 2, we briefly describe the generalized expectation criteria and present the proposed regularization terms. In Section 3, we conduct extensive experiments to justify the proposed methods. We survey related work in Section 4, and summarize our work in Section 5. Method We address the robustness problem on top of GE-FL BIBREF0 , a GE method which leverages labeled features as prior knowledge. A labeled feature is a strong indicator of a specific class and is manually provided to the classifier. For example, words like amazing, exciting can be labeled features for class positive in sentiment classification. Generalized Expectation Criteria Generalized expectation (GE) criteria BIBREF7 provides us a natural way to directly constrain the model in the preferred direction. For example, when we know the proportion of each class of the dataset in a classification task, we can guide the model to predict out a pre-specified class distribution. Formally, in a parameter estimation objective function, a GE term expresses preferences on the value of some constraint functions about the model's expectation. Given a constraint function $G({\rm x}, y)$ , a conditional model distribution $p_\theta (y|\rm x)$ , an empirical distribution $\tilde{p}({\rm x})$ over input samples and a score function $S$ , a GE term can be expressed as follows: $$S(E_{\tilde{p}({\rm x})}[E_{p_\theta (y|{\rm x})}[G({\rm x}, y)]])$$ (Eq. 4) Learning from Labeled Features Druck et al. ge-fl proposed GE-FL to learn from labeled features using generalized expectation criteria. When given a set of labeled features $K$ , the reference distribution over classes of these features is denoted by $\hat{p}(y| x_k), k \in K$ . GE-FL introduces the divergence between this reference distribution and the model predicted distribution $p_\theta (y | x_k)$ , as a term of the objective function: $$\mathcal {O} = \sum _{k \in K} KL(\hat{p}(y|x_k) || p_\theta (y | x_k)) + \sum _{y,i} \frac{\theta _{yi}^2}{2 \sigma ^2}$$ (Eq. 6) where $\theta _{yi}$ is the model parameter which indicates the importance of word $i$ to class $y$ . The predicted distribution $p_\theta (y | x_k)$ can be expressed as follows: $ p_\theta (y | x_k) = \frac{1}{C_k} \sum _{\rm x} p_\theta (y|{\rm x})I(x_k) $ in which $I(x_k)$ is 1 if feature $k$ occurs in instance ${\rm x}$ and 0 otherwise, $C_k = \sum _{\rm x} I(x_k)$ is the number of instances with a non-zero value of feature $k$ , and $p_\theta (y|{\rm x})$ takes a softmax form as follows: $ p_\theta (y|{\rm x}) = \frac{1}{Z(\rm x)}\exp (\sum _i \theta _{yi}x_i). $ To solve the optimization problem, L-BFGS can be used for parameter estimation. In the framework of GE, this term can be obtained by setting the constraint function $G({\rm x}, y) = \frac{1}{C_k} \vec{I} (y)I(x_k)$ , where $\vec{I}(y)$ is an indicator vector with 1 at the index corresponding to label $y$ and 0 elsewhere. Regularization Terms GE-FL reduces the heavy load of instance annotation and performs well when we provide prior knowledge with no bias. In our experiments, we observe that comparable numbers of labeled features for each class have to be supplied. But as mentioned before, it is often the case that we are not able to provide enough knowledge for some of the classes. For the baseball-hockey classification task, as shown before, GE-FL will predict most of the instances as baseball. 
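To make the GE-FL objective concrete, here is a small numpy sketch (ours, for illustration only) of Eq. (6) for a softmax model over bag-of-words features: the model's label distribution conditioned on each labeled feature, its KL divergence from the reference distribution, and the Gaussian prior on the parameters.

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gefl_objective(theta, X, labeled_feats, ref_dists, sigma=1.0):
    """theta: (n_classes, n_feats); X: (n_docs, n_feats) bag-of-words counts;
    labeled_feats: indices of the labeled features k; ref_dists: (|K|, n_classes)."""
    p_y_given_x = softmax(X @ theta.T)                    # p_theta(y|x), shape (n_docs, n_classes)
    total = 0.0
    for k, ref in zip(labeled_feats, ref_dists):
        mask = X[:, k] > 0                                # instances containing feature k
        p_y_given_k = p_y_given_x[mask].mean(axis=0)      # p_theta(y|x_k)
        q = np.clip(p_y_given_k, 1e-12, None)
        ref_c = np.clip(ref, 1e-12, None)
        total += float(np.sum(ref * np.log(ref_c / q)))   # KL(ref || model)
    return total + float(np.sum(theta ** 2)) / (2.0 * sigma ** 2)

# In practice this objective is minimized over theta with L-BFGS, as stated above.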
In this section, we will show three terms to make the model more robust. Neutral features are features that are not informative indicator of any classes, for instance, word player to the baseball-hockey classification task. Such features are usually frequent words across all categories. When we set the preference distribution of the neutral features to be uniform distributed, these neutral features will prevent the model from biasing to the class that has a dominate number of labeled features. Formally, given a set of neutral features $K^{^{\prime }}$ , the uniform distribution is $\hat{p}_u(y|x_k) = \frac{1}{|C|}, k \in K^{^{\prime }}$ , where $|C|$ is the number of classes. The objective function with the new term becomes $$\mathcal {O}_{NE} = \mathcal {O} + \sum _{k \in K^{^{\prime }}} KL(\hat{p}_u(y|x_k) || p_\theta (y | x_k)).$$ (Eq. 9) Note that we do not need manual annotation to provide neutral features. One simple way is to take the most common features as neutral features. Experimental results show that this strategy works successfully. Another way to prevent the model from drifting from the desired direction is to constrain the predicted class distribution on unlabeled data. When lacking knowledge about the class distribution of the data, one feasible way is to take maximum entropy principle, as below: $$\mathcal {O}_{ME} = \mathcal {O} + \lambda \sum _{y} p(y) \log p(y)$$ (Eq. 11) where $p(y)$ is the predicted class distribution, given by $ p(y) = \frac{1}{|X|} \sum _{\rm x} p_\theta (y | \rm x). $ To control the influence of this term on the overall objective function, we can tune $\lambda $ according to the difference in the number of labeled features of each class. In this paper, we simply set $\lambda $ to be proportional to the total number of labeled features, say $\lambda = \beta |K|$ . This maximum entropy term can be derived by setting the constraint function to $G({\rm x}, y) = \vec{I}(y)$ . Therefore, $E_{p_\theta (y|{\rm x})}[G({\rm x}, y)]$ is just the model distribution $p_\theta (y|{\rm x})$ and its expectation with the empirical distribution $\tilde{p}(\rm x)$ is simply the average over input samples, namely $p(y)$ . When $S$ takes the maximum entropy form, we can derive the objective function as above. Sometimes, we have already had much knowledge about the corpus, and can estimate the class distribution roughly without labeling instances. Therefore, we introduce the KL divergence between the predicted and reference class distributions into the objective function. Given the preference class distribution $\hat{p}(y)$ , we modify the objective function as follows: $$\mathcal {O}_{KL} &= \mathcal {O} + \lambda KL(\hat{p}(y) || p(y))$$ (Eq. 13) Similarly, we set $\lambda = \beta |K|$ . This divergence term can be derived by setting the constraint function to $G({\rm x}, y) = \vec{I}(y)$ and setting the score function to $S(\hat{p}, p) = \sum _i \hat{p}_i \log \frac{\hat{p}_i}{p_i}$ , where $p$ and $\hat{p}$ are distributions. Note that this regularization term involves the reference class distribution which will be discussed later. Experiments In this section, we first justify the approach when there exists unbalance in the number of labeled features or in class distribution. Then, to test the influence of $\lambda $ , we conduct some experiments with the method which incorporates the KL divergence of class distribution. Last, we evaluate our approaches in 9 commonly used text classification datasets. 
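Continuing the sketch above, the three auxiliary terms of Eq. (9), Eq. (11) and Eq. (13) can be written against the same softmax model as follows; the helper names are ours, and lambda is set to beta*|K| as in the text (the experiments later use lambda = 5|K|).

import numpy as np

def kl(p, q):
    p_c = np.clip(p, 1e-12, None)
    q_c = np.clip(q, 1e-12, None)
    return float(np.sum(p * np.log(p_c / q_c)))

def neutral_term(p_y_given_x, X, neutral_feats):
    """Eq. (9): KL(uniform || p_theta(y|x_k)) summed over neutral features k."""
    n_classes = p_y_given_x.shape[1]
    uniform = np.full(n_classes, 1.0 / n_classes)
    total = 0.0
    for k in neutral_feats:
        p_k = p_y_given_x[X[:, k] > 0].mean(axis=0)
        total += kl(uniform, p_k)
    return total

def max_entropy_term(p_y_given_x, beta, n_labeled):
    """Eq. (11): lambda * sum_y p(y) log p(y), with lambda = beta * |K|."""
    p_y = p_y_given_x.mean(axis=0)              # predicted class distribution p(y)
    return beta * n_labeled * float(np.sum(p_y * np.log(np.clip(p_y, 1e-12, None))))

def class_kl_term(p_y_given_x, ref_class_dist, beta, n_labeled):
    """Eq. (13): lambda * KL(p_hat(y) || p(y)), with lambda = beta * |K|."""
    p_y = p_y_given_x.mean(axis=0)
    return beta * n_labeled * kl(ref_class_dist, p_y)

# e.g.  objective = gefl_objective(...) + class_kl_term(p_y_given_x, np.array([0.5, 0.5]), beta=5, n_labeled=n_labeled_feats)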
We set $\lambda = 5|K|$ by default in all experiments unless there is explicit declaration. The baseline we choose here is GE-FL BIBREF0 , a method based on generalization expectation criteria. Data Preparation We evaluate our methods on several commonly used datasets whose themes range from sentiment, web-page, science to medical and healthcare. We use bag-of-words feature and remove stopwords in the preprocess stage. Though we have labels of all documents, we do not use them during the learning process, instead, we use the label of features. The movie dataset, in which the task is to classify the movie reviews as positive or negtive, is used for testing the proposed approaches with unbalanced labeled features, unbalanced datasets or different $\lambda $ parameters. All unbalanced datasets are constructed based on the movie dataset by randomly removing documents of the positive class. For each experiment, we conduct 10-fold cross validation. As described in BIBREF0 , there are two ways to obtain labeled features. The first way is to use information gain. We first calculate the mutual information of all features according to the labels of the documents and select the top 20 as labeled features for each class as a feature pool. Note that using information gain requires the document label, but this is only to simulate how we human provide prior knowledge to the model. The second way is to use LDA BIBREF9 to select features. We use the same selection process as BIBREF0 , where they first train a LDA on the dataset, and then select the most probable features of each topic (sorted by $P(w_i|t_j)$ , the probability of word $w_i$ given topic $t_j$ ). Similar to BIBREF10 , BIBREF0 , we estimate the reference distribution of the labeled features using a heuristic strategy. If there are $|C|$ classes in total, and $n$ classes are associated with a feature $k$ , the probability that feature $k$ is related with any one of the $n$ classes is $\frac{0.9}{n}$ and with any other class is $\frac{0.1}{|C| - n}$ . Neutral features are the most frequent words after removing stop words, and their reference distributions are uniformly distributed. We use the top 10 frequent words as neutral features in all experiments. With Unbalanced Labeled Features In this section, we evaluate our approach when there is unbalanced knowledge on the categories to be classified. The labeled features are obtained through information gain. Two settings are chosen: (a) We randomly select $t \in [1, 20]$ features from the feature pool for one class, and only one feature for the other. The original balanced movie dataset is used (positive:negative=1:1). (b) Similar to (a), but the dataset is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4). As shown in Figure 1 , Maximum entropy principle shows improvement only on the balanced case. An obvious reason is that maximum entropy only favors uniform distribution. Incorporating Neutral features performs similarly to maximum entropy since we assume that neutral words are uniformly distributed. Its accuracy decreases slowly when the number of labeled features becomes larger ( $t>4$ ) (Figure 1 (a)), suggesting that the model gradually biases to the class with more labeled features, just like GE-FL. Incorporating the KL divergence of class distribution performs much better than GE-FL on both balanced and unbalanced datasets. This shows that it is effective to control the unbalance in labeled features and in the dataset. 
With Balanced Labeled Features We also compare with the baseline when the labeled features are balanced. Similar to the experiment above, the labeled features are obtained by information gain. Two settings are experimented with: (a) We randomly select $t \in [1, 20]$ features from the feature pool for each class, and conduct comparisons on the original balanced movie dataset (positive:negtive=1:1). (b) Similar to (a), but the class distribution is unbalanced, by randomly removing 75% positive documents (positive:negative=1:4). Results are shown in Figure 2 . When the dataset is balanced (Figure 2 (a)), there is little difference between GE-FL and our methods. The reason is that the proposed regularization terms provide no additional knowledge to the model and there is no bias in the labeled features. On the unbalanced dataset (Figure 2 (b)), incorporating KL divergence is much better than GE-FL since we provide additional knowledge(the true class distribution), but maximum entropy and neutral features are much worse because forcing the model to approach the uniform distribution misleads it. With Unbalanced Class Distributions Our methods are also evaluated on datasets with different unbalanced class distributions. We manually construct several movie datasets with class distributions of 1:2, 1:3, 1:4 by randomly removing 50%, 67%, 75% positive documents. The original balanced movie dataset is used as a control group. We test with both balanced and unbalanced labeled features. For the balanced case, we randomly select 10 features from the feature pool for each class, and for the unbalanced case, we select 10 features for one class, and 1 feature for the other. Results are shown in Figure 3 . Figure 3 (a) shows that when the dataset and the labeled features are both balanced, there is little difference between our methods and GE-FL(also see Figure 2 (a)). But when the class distribution becomes more unbalanced, the difference becomes more remarkable. Performance of neutral features and maximum entropy decrease significantly but incorporating KL divergence increases remarkably. This suggests if we have more accurate knowledge about class distribution, KL divergence can guide the model to the right direction. Figure 3 (b) shows that when the labeled features are unbalanced, our methods significantly outperforms GE-FL. Incorporating KL divergence is robust enough to control unbalance both in the dataset and in labeled features while the other three methods are not so competitive. The Influence of λ\lambda We present the influence of $\lambda $ on the method that incorporates KL divergence in this section. Since we simply set $\lambda = \beta |K|$ , we just tune $\beta $ here. Note that when $\beta = 0$ , the newly introduced regularization term is disappeared, and thus the model is actually GE-FL. Again, we test the method with different $\lambda $ in two settings: (a) We randomly select $t \in [1, 20]$ features from the feature pool for one class, and only one feature for the other class. The original balanced movie dataset is used (positive:negative=1:1). (b) Similar to (a), but the dataset is unbalanced, obtained by randomly removing 75% positive documents (positive:negative=1:4). Results are shown in Figure 4 . As expected, $\lambda $ reflects how strong the regularization is. The model tends to be closer to our preferences with the increasing of $\lambda $ on both cases. Using LDA Selected Features We compare our methods with GE-FL on all the 9 datasets in this section. 
Instead of using features obtained by information gain, we use LDA to select labeled features. Unlike information gain, LDA does not employ any instance labels to find labeled features. In this setting, we can build classification models without any instance annotation, but just with labeled features. Table 1 shows that our three methods significantly outperform GE-FL. Incorporating neutral features performs better than GE-FL on 7 of the 9 datasets, maximum entropy is better on 8 datasets, and KL divergence better on 7 datasets. LDA selects out the most predictive features as labeled features without considering the balance among classes. GE-FL does not exert any control on such an issue, so the performance is severely suffered. Our methods introduce auxiliary regularization terms to control such a bias problem and thus promote the model significantly. Related Work There have been much work that incorporate prior knowledge into learning, and two related lines are surveyed here. One is to use prior knowledge to label unlabeled instances and then apply a standard learning algorithm. The other is to constrain the model directly with prior knowledge. Liu et al.text manually labeled features which are highly predictive to unsupervised clustering assignments and use them to label unlabeled data. Chang et al.guiding proposed constraint driven learning. They first used constraints and the learned model to annotate unlabeled instances, and then updated the model with the newly labeled data. Daumé daume2008cross proposed a self training method in which several models are trained on the same dataset, and only unlabeled instances that satisfy the cross task knowledge constraints are used in the self training process. MaCallum et al.gec proposed generalized expectation(GE) criteria which formalised the knowledge as constraint terms about the expectation of the model into the objective function.Graça et al.pr proposed posterior regularization(PR) framework which projects the model's posterior onto a set of distributions that satisfy the auxiliary constraints. Druck et al.ge-fl explored constraints of labeled features in the framework of GE by forcing the model's predicted feature distribution to approach the reference distribution. Andrzejewski et al.andrzejewski2011framework proposed a framework in which general domain knowledge can be easily incorporated into LDA. Altendorf et al.altendorf2012learning explored monotonicity constraints to improve the accuracy while learning from sparse data. Chen et al.chen2013leveraging tried to learn comprehensible topic models by leveraging multi-domain knowledge. Mann and McCallum simple,generalized incorporated not only labeled features but also other knowledge like class distribution into the objective function of GE-FL. But they discussed only from the semi-supervised perspective and did not investigate into the robustness problem, unlike what we addressed in this paper. There are also some active learning methods trying to use prior knowledge. Raghavan et al.feedback proposed to use feedback on instances and features interlacedly, and demonstrated that feedback on features boosts the model much. Druck et al.active proposed an active learning method which solicits labels on features rather than on instances and then used GE-FL to train the model. Conclusion and Discussions This paper investigates into the problem of how to leverage prior knowledge robustly in learning models. We propose three regularization terms on top of generalized expectation criteria. 
As demonstrated by the experimental results, the performance can be considerably improved when these factors are taken into account. Comparative results show that our proposed methods are more effective and work more robustly than the baselines. To the best of our knowledge, this is the first work to address the robustness problem of leveraging knowledge, and it may inspire other research. We then present more detailed discussions of the three regularization methods. Incorporating neutral features is the simplest form of regularization: it requires no modification of GE-FL, only identifying some common features. But as Figure 1(a) shows, using neutral features alone is not strong enough to handle extremely unbalanced labeled features. The maximum entropy regularization term shows a strong ability to control imbalance. This method needs no extra knowledge, and is thus suitable when we know nothing about the corpus. However, it assumes that the categories are uniformly distributed, which may not be the case in practice, and its performance degrades when the assumption is violated (see Figure 1(b), Figure 2(b), Figure 3(a)). The KL divergence term performs much better on unbalanced corpora than the other methods, because it utilizes the reference class distribution and makes no uniformity assumption. This suggests that additional knowledge does benefit the model. However, the KL divergence term requires the true class distribution to be provided. Sometimes we have exact knowledge of the true distribution, but sometimes we do not. Fortunately, the model is not very sensitive to the exact reference distribution, so a rough estimate of the true distribution is sufficient. In our experiments, when the true class distribution is 1:2 and the reference class distribution is set to 1:1.5, 1:2, or 1:2.5, the accuracy is 0.755, 0.756, and 0.760 respectively. This makes it practical to estimate the distribution with simple computation on the corpus, or to set it roughly from domain expertise. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How do they define robustness of a model? Only give me the answer and do not output any other words.
Answer:
[ "ability to accurately classify texts even when the amount of prior knowledge for different classes is unbalanced, and when the class distribution of the dataset is unbalanced", "Low sensitivity to bias in prior knowledge" ]
276
4,943
single_doc_qa
8dfd18a9b0fb4c708c1d43bd83b29fcb
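The passage in the record above contrasts three regularization terms added on top of GE-FL: neutral features, a maximum-entropy term on the predicted class distribution, and a KL-divergence term toward a reference class distribution weighted by $\lambda = \beta |K|$. The sketch below shows, in plain NumPy, how the two distribution-level penalties could be computed from a batch of predicted class probabilities. It is an illustration of the idea only, not the paper's actual objective (which also contains the GE feature-expectation terms and is optimized jointly); the function names and the direction of the KL divergence are assumptions.

```python
import numpy as np

def expected_class_distribution(pred_probs: np.ndarray) -> np.ndarray:
    """Average the per-document class probabilities P(y|x) over the corpus."""
    return pred_probs.mean(axis=0)

def max_entropy_penalty(pred_probs: np.ndarray) -> float:
    """Negative entropy of the expected class distribution.

    Minimizing this pushes the model toward a uniform class distribution,
    which matches the passage's observation that the term helps on balanced
    data but hurts when the true distribution is skewed.
    """
    p_hat = expected_class_distribution(pred_probs)
    return float(np.sum(p_hat * np.log(p_hat + 1e-12)))

def kl_penalty(pred_probs: np.ndarray, reference: np.ndarray,
               beta: float, num_labeled_features: int) -> float:
    """KL(p_hat || reference), scaled by lambda = beta * |K|.

    The KL direction used here is an assumption of this sketch.
    """
    p_hat = expected_class_distribution(pred_probs)
    kl = float(np.sum(p_hat * np.log((p_hat + 1e-12) / (reference + 1e-12))))
    return beta * num_labeled_features * kl

# Toy usage: 4 documents, 2 classes, reference distribution 1:4 as in the
# unbalanced movie-review setting described in the passage.
probs = np.array([[0.9, 0.1], [0.7, 0.3], [0.6, 0.4], [0.8, 0.2]])
print(max_entropy_penalty(probs))
print(kl_penalty(probs, reference=np.array([0.2, 0.8]),
                 beta=0.1, num_labeled_features=20))
```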
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Paper Info Title: Two-stage Pipeline for Multilingual Dialect Detection Publish Date: Unkown Author List: Ankit Vaidya (from Pune Institute of Computer Technology), Aditya Kane (from Pune Institute of Computer Technology) Figure Figure 1: Class distribution of dialects Figure 2: System diagram for dialect classification.The LID classifies the input into one of 3 languages.The sample is then further classified into dialects by language specific models. Figure 3: Confusion matrix of 9-way classification.Note that rows are normalized according to the number of samples is that class. Our complete results for Track-1 using the two-stage dialect detection pipeline.Model-* denotes the language of the models used for the experiments. Performance on Track-1 validation dataset of individual models used in the two-stage pipeline."Lg" stands for language of the model used. Comparative results of two-way classification using the finetuned (F.T.) predictions and predictions adapted from three-way classification models. abstract Dialect Identification is a crucial task for localizing various Large Language Models. This paper outlines our approach to the VarDial 2023 DSL-TL shared task. Here we have to identify three or two dialects from three languages each which results in a 9-way classification for Track-1 and 6-way classification for Track-2 respectively. Our proposed approach consists of a two-stage system and outperforms other participants' systems and previous works in this domain. We achieve a score of 58.54% for Track-1 and 85.61% for Track-2. Our codebase is available publicly 1 . Introduction Language has been the primary mode of communication for humans since the pre-historic ages. Studies have explored the evolution of language and outlined mathematical models that govern the intricacies of natural language . Inevitably, as humans established civilization in various parts of the world, this language was modified by, and for the group of people occupied by that particular geographical region. This gave rise to multiple national dialects of the same language. The VarDial workshop (colocated with EACL 2023) explores various dialects and variations of the same language. We participated in the Discriminating Between Similar Languages -True Labels (DSL-TL) shared task. In this task, the participants were provided with data from three languages, with each language having three varieties. This shared task consisted of two tracks -Track-1 featuring nine-way classification and Track-2 featuring six-way classification. The second track included two particular national dialects of each language (eg. American English and British English), and the first track had one general We ranked 1 st in both of the tracks. Moreover, we beat the next best submission by a margin of 4.5% in the first task and 5.6% in the second task.We were the only team to surpass the organizer baseline scores. We present our winning solution in this paper. We used an end-to-end deep learning pipeline which consisted of a language identification model and three language-specific models, one for each language. We converged upon the best combination by doing an elaborate analysis of various models available. Furthermore, in this work we also analyze the performance of the pipeline as a whole and also provide an ablation study. 
Lastly, we provide some future directions in this area of research. Related Work The present literature encompasses various aspects of dialect identification. We study this from three perspectives: large language models, language identification and dialect classification problems. Large Language Models The success of transformers and BERT based models was inevitable since the initial boom of the transformer 2017) model. In recent years, many other architectures like RoBERTa and ELECTRA have further pushed the state-of-the-art in this domain. Moreover, autoregressive models like GPT and GPT-2 have also shown their prowess. Multilingual versions of RoBERTA, namely XLM-RoBERTa are also available. Lastly, language specific models like Spanish BERT (la Rosa y Eduardo G. Ponferrada y Manu Romero y Paulo Villegas y Pablo González de Prado Salas y María Grandury, 2022) and Portuguese BERT are available as well. Our winning solution makes use of these large language models trained on specific languages. Language Identification Models Many multilingual language identification models have been developed in order to classify the language of the input sentence beforehand. Even though the initial works used n-gram models and generative mixture models or even conditional random fields and other classical machine learning methods like naive bayes , modern methods have shifted to the use of deep learning for language identification . Recent works have mainly focused on deep learning based language identification, where handling codemixed data is a big challenge in the domain. For our experiments, we use a version of XLM-RoBERTa finetuned on a language identification dataset 2 . This model has near-perfect test accuracy of 99.6%. Dialect Classification Dialect classification has been previously solved using statistical methods like Gaussian Mixture Models and Frame Selection Decoding or Support Vector Machines (SVM) . It has been explored relatively sparsely, mostly in the case for local languages . Deep learning approaches have been explored in previous editions of the VarDial workshop shared tasks and otherwise . Dialect classification was also explored previously as a part of other shared tasks . We want to stress that given the multilingual nature of the dataset, using the present methods directly was not an option. In our work, although we take inspiration from the previous works, we propose a novel system that surpasses the performance of the previous systems by a large margin. Data The dataset We observed that the class PT-BR had the most number of samples (2,724) and the class EN had the least number of samples (349), and thus the imbalance ratio was almost 1:8. We have illustrated the data distribution in Figure . We tried to mitigate this imbalance using over-sampling and weighted sampling methods. However, the improved data sampling method did not affect the performance. System Description This was a problem of multi-class classification having 9 classes for Track-1 and 6 classes for Track-2. The samples were belonging to 3 languages having 3 varieties each, so the classification pipeline was made in 2 stages. The Language Identification (LID) model which is the first stage classifies the sentence into 3 languages: English (EN), Spanish (ES) and Portuguese (PT). The LID is a pretrained XLM-RoBERTa that is fine-tuned for the task of language identification. It is able to classify the input sentence into 20 languages. We classify and separate the samples according to their language. 
The samples corresponding to the specific languages are then fed into the language specific models for dialect identification. For dialect identification we have used models like BERT and RoBERTa with a linear layer connected to the pooler output of the models. Then fine-tuning is done on the models for dialect identification using the samples corresponding to the specific languages. For the task of dialect identification we experimented with several pretrained models like XLM-RoBERTa, BERT, ELECTRA, GPT-2 and RoBERTa. All models were fine-tuned for 20 epochs with a learning rate of 1e-6 and weight decay 1e-6 with a batch size of 8. The best performing model checkpoint was chosen according to the epoch-wise validation macro-F1 score. 5 Experiments and Results Experiments using Large Language Models For the task of Dialect Identification we have tried various language specific models like XLM-RoBERTa, BERT, ELECTRA, RoBERTa and GPT- 2. The base variant of all these models were used and all the models were used through the Hugging-Face library. The pooler output of these models was passed through a linear layer and the models were fine-tuned. First, we experimented with different models for Track-1. All the models were trained for 20 epochs with learning rate 1e-6, weight decay 1e-6 and a batch size of 8. We used XLM-RoBERTa as the baseline for all 3 languages. The best performing models for the English language were RoBERTa and BERT whereas GPT-2 was the worst performing. Similarly the language specific versions of RoBERTa and BERT performed well for the Spanish and Portuguese respectively. Overall the worst performing model was GPT-2 across all 3 languages. The validation F1 scores are present in Table . The two best-performing models for every language were chosen for Track-2. The same procedure as specified above was used and the F1 scores are present in Table . The train and validation F1 scores for 2-class classification are higher for all models as compared to the F1 score of the same models for 3-class classification. This was mainly due to the poor representation and accuracy of classification of the third class. We observed symptoms of overfitting in all models after 12-15 epochs and the best validation F1 score was obtained in the range of 4-8 epochs. LID experiments The pipeline for dialect identification is divided into two parts as the sentences in the dataset belong to different languages. The stages are described in Section 4. The XLM-RoBERTa we have used for language classification has a test accuracy of 99.6% meaning it correctly classifies all input sentences and hence, can be considered as a perfect classifier. For the final pipeline we experimented using the two best performing models for each language in Track-1 and Track-2. For both the tracks we experimented with all 8 (2 3 ) possible combinations of models and calculated the validation F1 score for the combined validation dataset which had sentences belonging to all languages. The validation scores for Track-1 and Track-2 are shown in Table and Table respectively. For both the tracks, the three pipelines with the best validation F1 scores were chosen for submission. Using 3-way classifier as a 2-way classifier In Track-1, participants are expected to train a classifier which classifies amongst 9 classes, and in Track-2, participants are expected to train a classifier which classifies amongst 6 classes. These 6 classes are a proper subset of the 9 classes from Track-1. 
Thus, an intuitive baseline for Track-2 is to use the model finetuned for Track-1, whilst considering only the relevant classes for the latter task. The classes EN , ES and P T , i.e. the classes without any national dialect associated with them are not included in Track-2 as compared to Track-1. Thus, we calculate the predictions for the Track-2 validation dataset using the models for Track-1 and exclude the metrics for Track-1 specific classes to get the metrics for this "adapted" 2-way classification. We show the results of this experiment in Table and observe that, as expected, the adapted 2-way classification performs worse compared to the explicitly finetuned variant. Results for Track-1 and Track-2 We now present our experiments and their performance for both tracks. Our experiments for Track-1 are described in Table and our experiments for Track-2 are described in Table . The participants were allowed three submissions for evaluation on the test set, so we submitted predictions using the three systems which performed the best on the validation set. As mentioned in Section 5.2, we performed 2 3 , i.e. a total of 8 experiments using the two best models for each language. We observed that RoBERTa base on English, Spanish BERT base on Spanish and Portuguese BERT base performed the best on the testing set for Track-1. The same combination, with RoBERTa base for English, worked best for Track-2. All of our submissions were the top submissions for each track, which surpassed the next best competitors by a margin of 4.5% and 5.6% for Track-1 and Track-2 respectively. Ablation of best submissions We hereby make some observations of our submissions and other experiments. To assist this, we plot the confusion matrices of our best submissions for Track-1 and Track-2 in Figures respectively. Note that these confusion matrices have their rows (i.e. true labels axes) normalized according to the number of samples in the class. Here are observations from our experiments: 1. BERT-based models outperform other models across all languages: We observe that BERT-based models outperform ELECTRA-based and GPT-2-based models, as shown in Table . We speculate this is because of the inherent architecture of BERT, which combines semantic learning with knowledge retention. This combination of traits is particularly useful for this task. 2. Common labels perform the worst across all languages: We observe that the common labels EN , ES and P T perform the worst, both in the individual as well as the two-stage setup. We hypothesize this is because of the absence of dialect specific words, or words that are specific to the geographical origin of the national dialect (for example, "Yankees" for EN-US and "Oxford" for EN-GB). 3. English models work better than models of other languages: It can be noted from Figures 4 and 3 that the English models have the best performance across all classes. This can be attributed to two reasons: absence of national dialect specific words and lesser pretraining data in the case of Portuguese. 4. British English is most correctly classified class: We can observe that the Spanish or Portuguese models make equal number of mistakes in the case of either national dialect, in the case of Track-2 (see Figure ). However, in the case of English, the label EN − GB is correctly classified for more than 95% of the cases. We speculate this is because British English involves slightly distinctive grammar and semantics, which help the model separate it from other classes. 5. 
The proposed 2-step method is scalable for multiple language dialect classification: We can strongly assert that the novel 2-step deep learning method for multilingual dialect classification is a scalable method for the task due to two specific reasons: firstly, the multilingual models (like XLM-RoBERTa) might not have the vocabulary as well as the learning capabilities to learn the minute differences between individual dialects. Secondly, this system can be quickly expanded for a new language by simply adding a language specific dialect classifier, provided the language identification model supports that particular language. Conclusion In this paper we propose a two-stage classification pipeline for dialect identification for multilingual corpora. We conduct thorough ablations on this setup and provide valuable insights. We foresee multiple future directions for this work. The first is to expand this work to many languages and dialects. Secondly, it is a worthwhile research direction to distill this multi-model setup into a single model with multiple prediction heads. The obvious limitation of this system is the excessive memory consumption due to the usage of language specific models. For low resource languages this system is difficult to train and scale. We hope that these problems will be addressed by researchers in future works. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What was the best performing model for the Spanish language in Track-1? Only give me the answer and do not output any other words.
Answer:
[ "The best performing model for the Spanish language in Track-1 was Spanish BERT." ]
276
3,132
single_doc_qa
2eb77557d9524077904065cb67e5e6d9
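The system described in the record above is a two-stage pipeline: an XLM-RoBERTa language-identification model routes each sentence to a language-specific BERT/RoBERTa dialect classifier. The sketch below only illustrates that routing structure; the toy stand-in classifiers are hypothetical placeholders, and in the actual system they would be replaced by the finetuned LID and per-language dialect models described in the passage.

```python
from typing import Callable, Dict

Classifier = Callable[[str], str]

def two_stage_classifier(lid: Classifier,
                         dialect_models: Dict[str, Classifier]) -> Classifier:
    """Stage 1 predicts the language; stage 2 predicts the dialect."""
    def classify(sentence: str) -> str:
        lang = lid(sentence)                      # e.g. "EN", "ES", "PT"
        if lang not in dialect_models:
            raise ValueError(f"no dialect model registered for {lang!r}")
        return dialect_models[lang](sentence)
    return classify

# Toy stand-ins so the sketch runs end to end; these are NOT the paper's models.
toy_lid: Classifier = lambda s: "ES" if ("ñ" in s or " el " in f" {s} ") else "EN"
toy_dialects: Dict[str, Classifier] = {
    "EN": lambda s: "EN-GB" if "colour" in s else "EN-US",
    "ES": lambda s: "ES-AR" if "vos" in s.lower().split() else "ES-ES",
}

clf = two_stage_classifier(toy_lid, toy_dialects)
print(clf("My favourite colour is blue"))        # -> "EN-GB"
print(clf("Vos sabés que el clima cambió"))      # -> "ES-AR"
```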
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Semantic Role Labeling (SRL) has emerged as an important task in Natural Language Processing (NLP) due to its applicability in information extraction, question answering, and other NLP tasks. SRL is the problem of finding predicate-argument structure in a sentence, as illustrated below: INLINEFORM0 Here, the predicate WRITE has two arguments: `Mike' as A0 or the writer, and `a book' as A1 or the thing written. The labels A0 and A1 correspond to the PropBank annotations BIBREF0 . As the need for SRL arises in different domains and languages, the existing manually annotated corpora become insufficient to build supervised systems. This has motivated work on unsupervised SRL BIBREF1 , BIBREF2 , BIBREF3 . Previous work has indicated that unsupervised systems could benefit from the word alignment information in parallel text in two or more languages BIBREF4 , BIBREF5 , BIBREF6 . For example, consider the German translation of sentence INLINEFORM0 : INLINEFORM0 If sentences INLINEFORM0 and INLINEFORM1 have the word alignments: Mike-Mike, written-geschrieben, and book-Buch, the system might be able to predict A1 for Buch, even if there is insufficient information in the monolingual German data to learn this assignment. Thus, in languages where the resources are sparse or not good enough, or the distributions are not informative, SRL systems could be made more accurate by using parallel data with resource rich or more amenable languages. In this paper, we propose a joint Bayesian model for unsupervised semantic role induction in multiple languages. The model consists of individual Bayesian models for each language BIBREF3 , and crosslingual latent variables to incorporate soft role agreement between aligned constituents. This latent variable approach has been demonstrated to increase the performance in a multilingual unsupervised part-of-speech tagging model based on HMMs BIBREF4 . We investigate the application of this approach to unsupervised SRL, presenting the performance improvements obtained in different settings involving labeled and unlabeled data, and analyzing the annotation effort required to obtain similar gains using labeled data. We begin by briefly describing the unsupervised SRL pipeline and the monolingual semantic role induction model we use, and then describe our multilingual model. Unsupervised SRL Pipeline As established in previous work BIBREF7 , BIBREF8 , we use a standard unsupervised SRL setup, consisting of the following steps: The task we model, unsupervised semantic role induction, is the step 4 of this pipeline. Monolingual Model We use the Bayesian model of garg2012unsupervised as our base monolingual model. The semantic roles are predicate-specific. To model the role ordering and repetition preferences, the role inventory for each predicate is divided into Primary and Secondary roles as follows: For example, the complete role sequence in a frame could be: INLINEFORM0 INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , INLINEFORM8 INLINEFORM9 . The ordering is defined as the sequence of PRs, INLINEFORM10 INLINEFORM11 , INLINEFORM12 , INLINEFORM13 , INLINEFORM14 , INLINEFORM15 INLINEFORM16 . Each pair of consecutive PRs in an ordering is called an interval. 
Thus, INLINEFORM17 is an interval that contains two SRs, INLINEFORM18 and INLINEFORM19 . An interval could also be empty, for instance INLINEFORM20 contains no SRs. When we evaluate, these roles get mapped to gold roles. For instance, the PR INLINEFORM21 could get mapped to a core role like INLINEFORM22 , INLINEFORM23 , etc. or to a modifier role like INLINEFORM24 , INLINEFORM25 , etc. garg2012unsupervised reported that, in practice, PRs mostly get mapped to core roles and SRs to modifier roles, which conforms to the linguistic motivations for this distinction. Figure FIGREF16 illustrates two copies of the monolingual model, on either side of the crosslingual latent variables. The generative process is as follows: All the multinomial and binomial distributions have symmetric Dirichlet and beta priors respectively. Figure FIGREF7 gives the probability equations for the monolingual model. This formulation models the global role ordering and repetition preferences using PRs, and limited context for SRs using intervals. Ordering and repetition information was found to be helpful in supervised SRL as well BIBREF9 , BIBREF8 , BIBREF10 . More details, including the motivations behind this model, are in BIBREF3 . Multilingual Model The multilingual model uses word alignments between sentences in a parallel corpus to exploit role correspondences across languages. We make copies of the monolingual model for each language and add additional crosslingual latent variables (CLVs) to couple the monolingual models, capturing crosslingual semantic role patterns. Concretely, when training on parallel sentences, whenever the head words of the arguments are aligned, we add a CLV as a parent of the two corresponding role variables. Figure FIGREF16 illustrates this model. The generative process, as explained below, remains the same as the monolingual model for the most part, with the exception of aligned roles which are now generated by both the monolingual process as well as the CLV. Every predicate-tuple has its own inventory of CLVs specific to that tuple. Each CLV INLINEFORM0 is a multi-valued variable where each value defines a distribution over role labels for each language (denoted by INLINEFORM1 above). These distributions over labels are trained to be peaky, so that each value INLINEFORM2 for a CLV represents a correlation between the labels that INLINEFORM3 predicts in the two languages. For example, a value INLINEFORM4 for the CLV INLINEFORM5 might give high probabilities to INLINEFORM6 and INLINEFORM7 in language 1, and to INLINEFORM8 in language 2. If INLINEFORM9 is the only value for INLINEFORM10 that gives high probability to INLINEFORM11 in language 1, and the monolingual model in language 1 decides to assign INLINEFORM12 to the role for INLINEFORM13 , then INLINEFORM14 will predict INLINEFORM15 in language 2, with high probability. We generate the CLVs via a Chinese Restaurant Process BIBREF11 , a non-parametric Bayesian model, which allows us to induce the number of CLVs for every predicate-tuple from the data. We continue to train on the non-parallel sentences using the respective monolingual models. The multilingual model is deficient, since the aligned roles are being generated twice. Ideally, we would like to add the CLV as additional conditioning variables in the monolingual models. The new joint probability can be written as equation UID11 (Figure FIGREF7 ), which can be further decomposed following the decomposition of the monolingual model in Figure FIGREF7 . 
However, having this additional conditioning variable breaks the Dirichlet-multinomial conjugacy, which makes it intractable to marginalize out the parameters during inference. Hence, we use an approximation where we treat each of the aligned roles as being generated twice, once by the monolingual model and once by the corresponding CLV (equation ). This is the first work to incorporate the coupling of aligned arguments directly in a Bayesian SRL model. This makes it easier to see how to extend this model in a principled way to incorporate additional sources of information. First, the model scales gracefully to more than two languages. If there are a total of INLINEFORM0 languages, and there is an aligned argument in INLINEFORM1 of them, the multilingual latent variable is connected to only those INLINEFORM2 aligned arguments. Second, having one joint Bayesian model allows us to use the same model in various semi-supervised learning settings, just by fixing the annotated variables during training. Section SECREF29 evaluates a setting where we have some labeled data in one language (called source), while no labeled data in the second language (called target). Note that this is different from a classic annotation projection setting (e.g. BIBREF12 ), where the role labels are mapped from source constituents to aligned target constituents. Inference and Training The inference problem consists of predicting the role labels and CLVs (the hidden variables) given the predicate, its voice, and syntactic features of all the identified arguments (the visible variables). We use a collapsed Gibbs-sampling based approach to generate samples for the hidden variables (model parameters are integrated out). The sample counts and the priors are then used to calculate the MAP estimate of the model parameters. For the monolingual model, the role at a given position is sampled as: DISPLAYFORM0 where the subscript INLINEFORM0 refers to all the variables except at position INLINEFORM1 , INLINEFORM2 refers to the variables in all the training instances except the current one, and INLINEFORM3 refers to all the model parameters. The above integral has a closed form solution due to Dirichlet-multinomial conjugacy. For sampling roles in the multilingual model, we also need to consider the probabilities of roles being generated by the CLVs: DISPLAYFORM0 For sampling CLVs, we need to consider three factors: two corresponding to probabilities of generating the aligned roles, and the third one corresponding to selecting the CLV according to CRP. DISPLAYFORM0 where the aligned roles INLINEFORM0 and INLINEFORM1 are connected to INLINEFORM2 , and INLINEFORM3 refers to all the variables except INLINEFORM4 , INLINEFORM5 , and INLINEFORM6 . We use the trained parameters to parse the monolingual data using the monolingual model. The crosslingual parameters are ignored even if they were used during training. Thus, the information coming from the CLVs acts as a regularizer for the monolingual models. Evaluation Following the setting of titovcrosslingual, we evaluate only on the arguments that were correctly identified, as the incorrectly identified arguments do not have any gold semantic labels. Evaluation is done using the metric proposed by lang2011unsupervised, which has 3 components: (i) Purity (PU) measures how well an induced cluster corresponds to a single gold role, (ii) Collocation (CO) measures how well a gold role corresponds to a single induced cluster, and (iii) F1 is the harmonic mean of PU and CO. 
For each predicate, let INLINEFORM0 denote the total number of argument instances, INLINEFORM1 the instances in the induced cluster INLINEFORM2 , and INLINEFORM3 the instances having label INLINEFORM4 in gold annotations. INLINEFORM5 , INLINEFORM6 , and INLINEFORM7 . The score for each predicate is weighted by the number of its argument instances, and a weighted average is computed over all the predicates. Baseline We use the same baseline as used by lang2011unsupervised which has been shown to be difficult to outperform. This baseline assigns a semantic role to a constituent based on its syntactic function, i.e. the dependency relation to its head. If there is a total of INLINEFORM0 clusters, INLINEFORM1 most frequent syntactic functions get a cluster each, and the rest are assigned to the INLINEFORM2 th cluster. Closest Previous Work This work is closely related to the cross-lingual unsupervised SRL work of titovcrosslingual. Their model has separate monolingual models for each language and an extra penalty term which tries to maximize INLINEFORM0 and INLINEFORM1 i.e. for all the aligned arguments with role label INLINEFORM2 in language 1, it tries to find a role label INLINEFORM3 in language 2 such that the given proportion is maximized and vice verse. However, there is no efficient way to optimize the objective with this penalty term and the authors used an inference method similar to annotation projection. Further, the method does not scale naturally to more than two languages. Their algorithm first does monolingual inference in one language ignoring the penalty and then does the inference in the second language taking into account the penalty term. In contrast, our model adds the latent variables as a part of the model itself, and not an external penalty, which enables us to use the standard Bayesian learning methods such as sampling. The monolingual model we use BIBREF3 also has two main advantages over titovcrosslingual. First, the former incorporates a global role ordering probability that is missing in the latter. Secondly, the latter defines argument-keys as a tuple of four syntactic features and all the arguments having the same argument-keys are assigned the same role. This kind of hard clustering is avoided in the former model where two constituents having the same set of features might get assigned different roles if they appear in different contexts. Data Following titovcrosslingual, we run our experiments on the English (EN) and German (DE) sections of the CoNLL 2009 corpus BIBREF13 , and EN-DE section of the Europarl corpus BIBREF14 . We get about 40k EN and 36k DE sentences from the CoNLL 2009 training set, and about 1.5M parallel EN-DE sentences from Europarl. For appropriate comparison, we keep the same setting as in BIBREF6 for automatic parses and argument identification, which we briefly describe here. The EN sentences are parsed syntactically using MaltParser BIBREF15 and DE using LTH parser BIBREF16 . All the non-auxiliary verbs are selected as predicates. In CoNLL data, this gives us about 3k EN and 500 DE predicates. The total number of predicate instances are 3.4M in EN (89k CoNLL + 3.3M Europarl) and 2.62M in DE (17k CoNLL + 2.6M Europarl). The arguments for EN are identified using the heuristics proposed by lang2011unsupervised. However, we get an F1 score of 85.1% for argument identification on CoNLL 2009 EN data as opposed to 80.7% reported by titovcrosslingual. 
This could be due to implementation differences, which unfortunately makes our EN results incomparable. For DE, the arguments are identified using the LTH system BIBREF16 , which gives an F1 score of 86.5% on the CoNLL 2009 DE data. The word alignments for the EN-DE parallel Europarl corpus are computed using GIZA++ BIBREF17 . For high-precision, only the intersecting alignments in the two directions are kept. We define two semantic arguments as aligned if their head-words are aligned. In total we get 9.3M arguments for EN (240k CoNLL + 9.1M Europarl) and 4.43M for DE (32k CoNLL + 4.4M Europarl). Out of these, 0.76M arguments are aligned. Main Results Since the CoNLL annotations have 21 semantic roles in total, we use 21 roles in our model as well as the baseline. Following garg2012unsupervised, we set the number of PRs to 2 (excluding INLINEFORM0 , INLINEFORM1 and INLINEFORM2 ), and SRs to 21-2=19. Table TABREF27 shows the results. In the first setting (Line 1), we train and test the monolingual model on the CoNLL data. We observe significant improvements in F1 score over the Baseline (Line 0) in both languages. Using the CoNLL 2009 dataset alone, titovcrosslingual report an F1 score of 80.9% (PU=86.8%, CO=75.7%) for German. Thus, our monolingual model outperforms their monolingual model in German. For English, they report an F1 score of 83.6% (PU=87.5%, CO=80.1%), but note that our English results are not directly comparable to theirs due to differences argument identification, as discussed in section SECREF25 . As their argument identification score is lower, perhaps their system is discarding “difficult” arguments which leads to a higher clustering score. In the second setting (Line 2), we use the additional monolingual Europarl (EP) data for training. We get equivalent results in English and a significant improvement in German compared to our previous setting (Line 1). The German dataset in CoNLL is quite small and benefits from the additional EP training data. In contrast, the English model is already quite good due to a relatively big dataset from CoNLL, and good accuracy syntactic parsers. Unfortunately, titovcrosslingual do not report results with this setting. The third setting (Line 3) gives the results of our multilingual model, which adds the word alignments in the EP data. Comparing with Line 2, we get non-significant improvements in both languages. titovcrosslingual obtain an F1 score of 82.7% (PU=85.0%, CO=80.6%) for German, and 83.7% (PU=86.8%, CO=80.7%) for English. Thus, for German, our multilingual Bayesian model is able to capture the cross-lingual patterns at least as well as the external penalty term in BIBREF6 . We cannot compare the English results unfortunately due to differences in argument identification. We also compared monolingual and bilingual training data using a setting that emulates the standard supervised setup of separate training and test data sets. We train only on the EP dataset and test on the CoNLL dataset. Lines 4 and 5 of Table TABREF27 give the results. The multilingual model obtains small improvements in both languages, which confirms the results from the standard unsupervised setup, comparing lines 2 to 3. These results indicate that little information can be learned about semantic roles from this parallel data setup. One possible explanation for this result is that the setup itself is inadequate. Given the definition of aligned arguments, only 8% of English arguments and 17% of German arguments are aligned. 
This plus our experiments suggest that improving the alignment model is a necessary step to making effective use of parallel data in multilingual SRI, for example by joint modeling with SRI. We leave this exploration to future work. Multilingual Training with Labeled Data for One Language Another motivation for jointly modeling SRL in multiple languages is the transfer of information from a resource rich language to a resource poor language. We evaluated our model in a very general annotation transfer scenario, where we have a small labeled dataset for one language (source), and a large parallel unlabeled dataset for the source and another (target) language. We investigate whether this setting improves the parameter estimates for the target language. To this end, we clamp the role annotations of the source language in the CoNLL dataset using a predefined mapping, and do not sample them during training. This data gives us good parameters for the source language, which are used to sample the roles of the source language in the unlabeled Europarl data. The CLVs aim to capture this improvement and thereby improve sampling and parameter estimates for the target language. Table TABREF28 shows the results of this experiment. We obtain small improvements in the target languages. As in the unsupervised setting, the small percentage of aligned roles probably limits the impact of the cross-lingual information. Labeled Data in Monolingual Model We explored the improvement in the monolingual model in a semi-supervised setting. To this end, we randomly selected INLINEFORM0 of the sentences in the CoNLL dataset as “supervised sentences” and the rest INLINEFORM1 were kept unsupervised. Next, we clamped the role labels of the supervised sentences using the predefined mapping from Section SECREF29 . Sampling was done on the unsupervised sentences as usual. We then measured the clustering performance using the trained parameters. To access the contribution of partial supervision better, we constructed a “supervised baseline” as follows. For predicates seen in the supervised sentences, a MAP estimate of the parameters was calculated using the predefined mapping. For the unseen predicates, the standard baseline was used. Figures FIGREF33 and FIGREF33 show the performance variation with INLINEFORM0 . We make the following observations: [leftmargin=*] In both languages, at around INLINEFORM0 , the supervised baseline starts outperforming the semi-supervised model, which suggests that manually labeling about 10% of the sentences is a good enough alternative to our training procedure. Note that 10% amounts to about 3.6k sentences in German and 4k in English. We noticed that the proportion of seen predicates increases dramatically as we increase the proportion of supervised sentences. At 10% supervised sentences, the model has already seen 63% of predicates in German and 44% in English. This explains to some extent why only 10% labeled sentences are enough. For German, it takes about 3.5% or 1260 supervised sentences to have the same performance increase as 1.5M unlabeled sentences (Line 1 to Line 2 in Table TABREF27 ). Adding about 180 more supervised sentences also covers the benefit obtained by alignments in the multilingual model (Line 2 to Line 3 in Table TABREF27 ). There is no noticeable performance difference in English. We also evaluated the performance variation on a completely unseen CoNLL test set. 
Since the test set is very small compared to the training set, the clustering evaluation is not as reliable. Nonetheless, we broadly obtained the same pattern. Related Work As discussed in section SECREF24 , our work is closely related to the crosslingual unsupervised SRL work of titovcrosslingual. The idea of using superlingual latent variables to capture cross-lingual information was proposed for POS tagging by naseem2009multilingual, which we use here for SRL. In a semi-supervised setting, pado2009cross used a graph based approach to transfer semantic role annotations from English to German. furstenau2009graph used a graph alignment method to measure the semantic and syntactic similarity between dependency tree arguments of known and unknown verbs. For monolingual unsupervised SRL, swier2004unsupervised presented the first work on a domain-general corpus, the British National Corpus, using 54 verbs taken from VerbNet. garg2012unsupervised proposed a Bayesian model for this problem that we use here. titov2012bayesian also proposed a closely related Bayesian model. grenager2006unsupervised proposed a generative model but their parameter space consisted of all possible linkings of syntactic constituents and semantic roles, which made unsupervised learning difficult and a separate language-specific rule based method had to be used to constrain this space. Other proposed models include an iterative split-merge algorithm BIBREF18 and a graph-partitioning based approach BIBREF1 . marquez2008semantic provide a good overview of the supervised SRL systems. Conclusions We propose a Bayesian model of semantic role induction (SRI) that uses crosslingual latent variables to capture role alignments in parallel corpora. The crosslingual latent variables capture correlations between roles in different languages, and regularize the parameter estimates of the monolingual models. Because this is a joint Bayesian model of multilingual SRI, we can apply the same model to a variety of training scenarios just by changing the inference procedure appropriately. We evaluate monolingual SRI with a large unlabeled dataset, bilingual SRI with a parallel corpus, bilingual SRI with annotations available for the source language, and monolingual SRI with a small labeled dataset. Increasing the amount of monolingual unlabeled data significantly improves SRI in German but not in English. Adding word alignments in parallel sentences results in small, non significant improvements, even if there is some labeled data available in the source language. This difficulty in showing the usefulness of parallel corpora for SRI may be due to the current assumptions about role alignments, which mean that only a small percentage of roles are aligned. Further analyses reveals that annotating small amounts of data can easily outperform the performance gains obtained by adding large unlabeled dataset as well as adding parallel corpora. Future work includes training on different language pairs, on more than two languages, and with more inclusive models of role alignment. Acknowledgments This work was funded by the Swiss NSF grant 200021_125137 and EC FP7 grant PARLANCE. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What does an individual model consist of? Only give me the answer and do not output any other words.
Answer:
[ "Bayesian model of garg2012unsupervised as our base monolingual model" ]
276
5,117
single_doc_qa
a7f1b1932721494ab355fa3f031aa41d
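The evaluation section of the passage above uses the clustering metric of Lang and Lapata (2011): purity (PU), collocation (CO), and their harmonic mean F1, computed per predicate and averaged with weights proportional to the number of argument instances. Below is a minimal sketch of that metric for a single predicate, reconstructed from the standard definition the passage outlines (the exact formulas are garbled in the extraction); the variable names are my own.

```python
from collections import Counter
from typing import Sequence, Tuple

def purity_collocation_f1(induced: Sequence[str],
                          gold: Sequence[str]) -> Tuple[float, float, float]:
    """PU, CO and F1 for one predicate.

    induced[i] is the induced cluster of argument instance i,
    gold[i] is its gold role label.
    """
    assert len(induced) == len(gold) and len(gold) > 0
    n = len(gold)
    overlap = Counter(zip(induced, gold))          # |C_i intersect G_j|
    clusters, labels = set(induced), set(gold)
    pu = sum(max(overlap[(c, g)] for g in labels) for c in clusters) / n
    co = sum(max(overlap[(c, g)] for c in clusters) for g in labels) / n
    f1 = 2 * pu * co / (pu + co) if pu + co > 0 else 0.0
    return pu, co, f1

# Dataset-level scores would average over predicates, weighting each predicate
# by its number of argument instances, as the passage states.
pu, co, f1 = purity_collocation_f1(
    induced=["c1", "c1", "c2", "c2", "c2"],
    gold=["A0", "A0", "A1", "A1", "AM-TMP"],
)
print(round(pu, 3), round(co, 3), round(f1, 3))   # 0.8 1.0 0.889
```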
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Abusive language refers to any type of insult, vulgarity, or profanity that debases the target; it also can be anything that causes aggravation BIBREF0 , BIBREF1 . Abusive language is often reframed as, but not limited to, offensive language BIBREF2 , cyberbullying BIBREF3 , othering language BIBREF4 , and hate speech BIBREF5 . Recently, an increasing number of users have been subjected to harassment, or have witnessed offensive behaviors online BIBREF6 . Major social media companies (i.e. Facebook, Twitter) have utilized multiple resources—artificial intelligence, human reviewers, user reporting processes, etc.—in effort to censor offensive language, yet it seems nearly impossible to successfully resolve the issue BIBREF7 , BIBREF8 . The major reason of the failure in abusive language detection comes from its subjectivity and context-dependent characteristics BIBREF9 . For instance, a message can be regarded as harmless on its own, but when taking previous threads into account it may be seen as abusive, and vice versa. This aspect makes detecting abusive language extremely laborious even for human annotators; therefore it is difficult to build a large and reliable dataset BIBREF10 . Previously, datasets openly available in abusive language detection research on Twitter ranged from 10K to 35K in size BIBREF9 , BIBREF11 . This quantity is not sufficient to train the significant number of parameters in deep learning models. Due to this reason, these datasets have been mainly studied by traditional machine learning methods. Most recently, Founta et al. founta2018large introduced Hate and Abusive Speech on Twitter, a dataset containing 100K tweets with cross-validated labels. Although this corpus has great potential in training deep models with its significant size, there are no baseline reports to date. This paper investigates the efficacy of different learning models in detecting abusive language. We compare accuracy using the most frequently studied machine learning classifiers as well as recent neural network models. Reliable baseline results are presented with the first comparative study on this dataset. Additionally, we demonstrate the effect of different features and variants, and describe the possibility for further improvements with the use of ensemble models. Related Work The research community introduced various approaches on abusive language detection. Razavi et al. razavi2010offensive applied Naïve Bayes, and Warner and Hirschberg warner2012detecting used Support Vector Machine (SVM), both with word-level features to classify offensive language. Xiang et al. xiang2012detecting generated topic distributions with Latent Dirichlet Allocation BIBREF12 , also using word-level features in order to classify offensive tweets. More recently, distributed word representations and neural network models have been widely applied for abusive language detection. Djuric et al. djuric2015hate used the Continuous Bag Of Words model with paragraph2vec algorithm BIBREF13 to more accurately detect hate speech than that of the plain Bag Of Words models. Badjatiya et al. badjatiya2017deep implemented Gradient Boosted Decision Trees classifiers using word representations trained by deep learning models. 
Other researchers have investigated character-level representations and their effectiveness compared to word-level representations BIBREF14 , BIBREF15 . As traditional machine learning methods have relied on feature engineering, (i.e. n-grams, POS tags, user information) BIBREF1 , researchers have proposed neural-based models with the advent of larger datasets. Convolutional Neural Networks and Recurrent Neural Networks have been applied to detect abusive language, and they have outperformed traditional machine learning classifiers such as Logistic Regression and SVM BIBREF15 , BIBREF16 . However, there are no studies investigating the efficiency of neural models with large-scale datasets over 100K. Methodology This section illustrates our implementations on traditional machine learning classifiers and neural network based models in detail. Furthermore, we describe additional features and variant models investigated. Traditional Machine Learning Models We implement five feature engineering based machine learning classifiers that are most often used for abusive language detection. In data preprocessing, text sequences are converted into Bag Of Words (BOW) representations, and normalized with Term Frequency-Inverse Document Frequency (TF-IDF) values. We experiment with word-level features using n-grams ranging from 1 to 3, and character-level features from 3 to 8-grams. Each classifier is implemented with the following specifications: Naïve Bayes (NB): Multinomial NB with additive smoothing constant 1 Logistic Regression (LR): Linear LR with L2 regularization constant 1 and limited-memory BFGS optimization Support Vector Machine (SVM): Linear SVM with L2 regularization constant 1 and logistic loss function Random Forests (RF): Averaging probabilistic predictions of 10 randomized decision trees Gradient Boosted Trees (GBT): Tree boosting with learning rate 1 and logistic loss function Neural Network based Models Along with traditional machine learning approaches, we investigate neural network based models to evaluate their efficacy within a larger dataset. In particular, we explore Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and their variant models. A pre-trained GloVe BIBREF17 representation is used for word-level features. CNN: We adopt Kim's kim2014convolutional implementation as the baseline. The word-level CNN models have 3 convolutional filters of different sizes [1,2,3] with ReLU activation, and a max-pooling layer. For the character-level CNN, we use 6 convolutional filters of various sizes [3,4,5,6,7,8], then add max-pooling layers followed by 1 fully-connected layer with a dimension of 1024. Park and Fung park2017one proposed a HybridCNN model which outperformed both word-level and character-level CNNs in abusive language detection. In order to evaluate the HybridCNN for this dataset, we concatenate the output of max-pooled layers from word-level and character-level CNN, and feed this vector to a fully-connected layer in order to predict the output. All three CNN models (word-level, character-level, and hybrid) use cross entropy with softmax as their loss function and Adam BIBREF18 as the optimizer. RNN: We use bidirectional RNN BIBREF19 as the baseline, implementing a GRU BIBREF20 cell for each recurrent unit. From extensive parameter-search experiments, we chose 1 encoding layer with 50 dimensional hidden states and an input dropout probability of 0.3. The RNN models use cross entropy with sigmoid as their loss function and Adam as the optimizer. 
For a possible improvement, we apply a self-matching attention mechanism on RNN baseline models BIBREF21 so that they may better understand the data by retrieving text sequences twice. We also investigate a recently introduced method, Latent Topic Clustering (LTC) BIBREF22 . The LTC method extracts latent topic information from the hidden states of RNN, and uses it for additional information in classifying the text data. Feature Extension While manually analyzing the raw dataset, we noticed that looking at the tweet one has replied to or has quoted, provides significant contextual information. We call these, “context tweets". As humans can better understand a tweet with the reference of its context, our assumption is that computers also benefit from taking context tweets into account in detecting abusive language. As shown in the examples below, (2) is labeled abusive due to the use of vulgar language. However, the intention of the user can be better understood with its context tweet (1). (1) I hate when I'm sitting in front of the bus and somebody with a wheelchair get on. INLINEFORM0 (2) I hate it when I'm trying to board a bus and there's already an as**ole on it. Similarly, context tweet (3) is important in understanding the abusive tweet (4), especially in identifying the target of the malice. (3) Survivors of #Syria Gas Attack Recount `a Cruel Scene'. INLINEFORM0 (4) Who the HELL is “LIKE" ING this post? Sick people.... Huang et al. huang2016modeling used several attributes of context tweets for sentiment analysis in order to improve the baseline LSTM model. However, their approach was limited because the meta-information they focused on—author information, conversation type, use of the same hashtags or emojis—are all highly dependent on data. In order to avoid data dependency, text sequences of context tweets are directly used as an additional feature of neural network models. We use the same baseline model to convert context tweets to vectors, then concatenate these vectors with outputs of their corresponding labeled tweets. More specifically, we concatenate max-pooled layers of context and labeled tweets for the CNN baseline model. As for RNN, the last hidden states of context and labeled tweets are concatenated. Dataset Hate and Abusive Speech on Twitter BIBREF10 classifies tweets into 4 labels, “normal", “spam", “hateful" and “abusive". We were only able to crawl 70,904 tweets out of 99,996 tweet IDs, mainly because the tweet was deleted or the user account had been suspended. Table shows the distribution of labels of the crawled data. Data Preprocessing In the data preprocessing steps, user IDs, URLs, and frequently used emojis are replaced as special tokens. Since hashtags tend to have a high correlation with the content of the tweet BIBREF23 , we use a segmentation library BIBREF24 for hashtags to extract more information. For character-level representations, we apply the method Zhang et al. zhang2015character proposed. Tweets are transformed into one-hot encoded vectors using 70 character dimensions—26 lower-cased alphabets, 10 digits, and 34 special characters including whitespace. Training and Evaluation In training the feature engineering based machine learning classifiers, we truncate vector representations according to the TF-IDF values (the top 14,000 and 53,000 for word-level and character-level representations, respectively) to avoid overfitting. For neural network models, words that appear only once are replaced as unknown tokens. 
Since the dataset used is not split into train, development, and test sets, we perform 10-fold cross validation, obtaining the average of 5 tries; we divide the dataset randomly by a ratio of 85:5:10, respectively. In order to evaluate the overall performance, we calculate the weighted average of precision, recall, and F1 scores of all four labels, “normal”, “spam”, “hateful”, and “abusive”. Empirical Results As shown in Table , neural network models are more accurate than feature engineering based models (i.e. NB, SVM, etc.) except for the LR model—the best LR model has the same F1 score as the best CNN model. Among traditional machine learning models, the most accurate in classifying abusive language is the LR model followed by ensemble models such as GBT and RF. Character-level representations improve F1 scores of SVM and RF classifiers, but they have no positive effect on other models. For neural network models, RNN with LTC modules have the highest accuracy score, but there are no significant improvements from its baseline model and its attention-added model. Similarly, HybridCNN does not improve the baseline CNN model. For both CNN and RNN models, character-level features significantly decrease the accuracy of classification. The use of context tweets generally have little effect on baseline models, however they noticeably improve the scores of several metrics. For instance, CNN with context tweets score the highest recall and F1 for “hateful" labels, and RNN models with context tweets have the highest recall for “abusive" tweets. Discussion and Conclusion While character-level features are known to improve the accuracy of neural network models BIBREF16 , they reduce classification accuracy for Hate and Abusive Speech on Twitter. We conclude this is because of the lack of labeled data as well as the significant imbalance among the different labels. Unlike neural network models, character-level features in traditional machine learning classifiers have positive results because we have trained the models only with the most significant character elements using TF-IDF values. Variants of neural network models also suffer from data insufficiency. However, these models show positive performances on “spam" (14%) and “hateful" (4%) tweets—the lower distributed labels. The highest F1 score for “spam" is from the RNN-LTC model (0.551), and the highest for “hateful" is CNN with context tweets (0.309). Since each variant model excels in different metrics, we expect to see additional improvements with the use of ensemble models of these variants in future works. In this paper, we report the baseline accuracy of different learning models as well as their variants on the recently introduced dataset, Hate and Abusive Speech on Twitter. Experimental results show that bidirectional GRU networks with LTC provide the most accurate results in detecting abusive language. Additionally, we present the possibility of using ensemble models of variant models and features for further improvements. Acknowledgments K. Jung is with the Department of Electrical and Computer Engineering, ASRI, Seoul National University, Seoul, Korea. This work was supported by the National Research Foundation of Korea (NRF) funded by the Korea government (MSIT) (No. 2016M3C4A7952632), the Technology Innovation Program (10073144) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea). We would also like to thank Yongkeun Hwang and Ji Ho Park for helpful discussions and their valuable insights. 
Answer the question based on the given passages following the instruction: Answer the question related to Passage 1.
Question: What additional features and context are proposed? Only give me the answer and do not output any other words.
Answer:
[ "using tweets that one has replied or quoted to as contextual information", "text sequences of context tweets" ]
276
2,817
single_doc_qa
57e1e9d0bf0a462dad279e0ffb1a7984
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Question Generation (QG) is the task of automatically creating questions from a range of inputs, such as natural language text BIBREF0, knowledge base BIBREF1 and image BIBREF2. QG is an increasingly important area in NLP with various application scenarios such as intelligence tutor systems, open-domain chatbots and question answering dataset construction. In this paper, we focus on question generation from reading comprehension materials like SQuAD BIBREF3. As shown in Figure FIGREF1, given a sentence in the reading comprehension paragraph and the text fragment (i.e., the answer) that we want to ask about, we aim to generate a question that is asked about the specified answer. Question generation for reading comprehension is firstly formalized as a declarative-to-interrogative sentence transformation problem with predefined rules or templates BIBREF4, BIBREF0. With the rise of neural models, Du2017LearningTA propose to model this task under the sequence-to-sequence (Seq2Seq) learning framework BIBREF5 with attention mechanism BIBREF6. However, question generation is a one-to-many sequence generation problem, i.e., several aspects can be asked given a sentence. Zhou2017NeuralQG propose the answer-aware question generation setting which assumes the answer, a contiguous span inside the input sentence, is already known before question generation. To capture answer-relevant words in the sentence, they adopt a BIO tagging scheme to incorporate the answer position embedding in Seq2Seq learning. Furthermore, Sun2018AnswerfocusedAP propose that tokens close to the answer fragments are more likely to be answer-relevant. Therefore, they explicitly encode the relative distance between sentence words and the answer via position embedding and position-aware attention. Although existing proximity-based answer-aware approaches achieve reasonable performance, we argue that such intuition may not apply to all cases especially for sentences with complex structure. For example, Figure FIGREF1 shows such an example where those approaches fail. This sentence contains a few facts and due to the parenthesis (i.e. “the area's coldest month”), some facts intertwine: “The daily mean temperature in January is 0.3$^\circ $C” and “January is the area's coldest month”. From the question generated by a proximity-based answer-aware baseline, we find that it wrongly uses the word “coldest” but misses the correct word “mean” because “coldest” has a shorter distance to the answer “0.3$^\circ $C”. In summary, their intuition that “the neighboring words of the answer are more likely to be answer-relevant and have a higher chance to be used in the question” is not reliable. To quantitatively show this drawback of these models, we implement the approach proposed by Sun2018AnswerfocusedAP and analyze its performance under different relative distances between the answer and other non-stop sentence words that also appear in the ground truth question. The results are shown in Table TABREF2. We find that the performance drops at most 36% when the relative distance increases from “$0\sim 10$” to “$>10$”. In other words, when the useful context is located far away from the answer, current proximity-based answer-aware approaches will become less effective, since they overly emphasize neighboring words of the answer. 
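The relative-distance breakdown used in the analysis above (Table TABREF2) can be reproduced with a small helper. This is an illustrative reconstruction, not the authors' script; tokenization and the stop-word list are left as inputs.

```python
def answer_relative_distances(sentence_tokens, answer_span, question_tokens, stopwords):
    """For each non-stop sentence token that also appears in the ground-truth
    question (excluding the answer fragment itself), return its distance in
    token positions from the answer span. Bucketing the results, e.g. into
    0-10 versus >10, gives the breakdown discussed around Table TABREF2."""
    ans_start, ans_end = answer_span          # inclusive token indices
    question_vocab = {t.lower() for t in question_tokens}
    distances = []
    for i, tok in enumerate(sentence_tokens):
        if ans_start <= i <= ans_end:
            continue                          # skip the answer itself
        low = tok.lower()
        if low in stopwords or low not in question_vocab:
            continue
        distances.append(ans_start - i if i < ans_start else i - ans_end)
    return distances
```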
To address this issue, we extract the structured answer-relevant relations from sentences and propose a method to jointly model such structured relation and the unstructured sentence for question generation. The structured answer-relevant relation is likely to be to the point context and thus can help keep the generated question to the point. For example, Figure FIGREF1 shows our framework can extract the right answer-relevant relation (“The daily mean temperature in January”, “is”, “32.6$^\circ $F (0.3$^\circ $C)”) among multiple facts. With the help of such structured information, our model is less likely to be confused by sentences with a complex structure. Specifically, we firstly extract multiple relations with an off-the-shelf Open Information Extraction (OpenIE) toolbox BIBREF7, then we select the relation that is most relevant to the answer with carefully designed heuristic rules. Nevertheless, it is challenging to train a model to effectively utilize both the unstructured sentence and the structured answer-relevant relation because both of them could be noisy: the unstructured sentence may contain multiple facts which are irrelevant to the target question, while the limitation of the OpenIE tool may produce less accurate extracted relations. To explore their advantages simultaneously and avoid the drawbacks, we design a gated attention mechanism and a dual copy mechanism based on the encoder-decoder framework, where the former learns to control the information flow between the unstructured and structured inputs, while the latter learns to copy words from two sources to maintain the informativeness and faithfulness of generated questions. In the evaluations on the SQuAD dataset, our system achieves significant and consistent improvement as compared to all baseline methods. In particular, we demonstrate that the improvement is more significant with a larger relative distance between the answer and other non-stop sentence words that also appear in the ground truth question. Furthermore, our model is capable of generating diverse questions for a single sentence-answer pair where the sentence conveys multiple relations of its answer fragment. Framework Description In this section, we first introduce the task definition and our protocol to extract structured answer-relevant relations. Then we formalize the task under the encoder-decoder framework with gated attention and dual copy mechanism. Framework Description ::: Problem Definition We formalize our task as an answer-aware Question Generation (QG) problem BIBREF8, which assumes answer phrases are given before generating questions. Moreover, answer phrases are shown as text fragments in passages. Formally, given the sentence $S$, the answer $A$, and the answer-relevant relation $M$, the task of QG aims to find the best question $\overline{Q}$ such that, where $A$ is a contiguous span inside $S$. Framework Description ::: Answer-relevant Relation Extraction We utilize an off-the-shelf toolbox of OpenIE to the derive structured answer-relevant relations from sentences as to the point contexts. Relations extracted by OpenIE can be represented either in a triple format or in an n-ary format with several secondary arguments, and we employ the latter to keep the extractions as informative as possible and avoid extracting too many similar relations in different granularities from one sentence. We join all arguments in the extracted n-ary relation into a sequence as our to the point context. 
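Since the passage only states that an off-the-shelf OpenIE toolbox (BIBREF7) is used, without fixing its API, the following sketch assumes a generic extraction record; the field names are placeholders, not the real toolbox interface. It shows the step of joining all arguments of an n-ary relation into one token sequence that serves as the to-the-point context.

```python
def relation_to_context(relation):
    """Join the arguments of an n-ary OpenIE extraction into a single token
    sequence. `relation` is assumed to look like
      {"arg1": "The daily mean temperature in January",
       "rel": "is",
       "args2": ["32.6 F ( 0.3 C )"],
       "confidence": 0.93}
    and the exact schema depends on the OpenIE toolbox actually used."""
    parts = [relation["arg1"], relation["rel"], *relation.get("args2", [])]
    return " ".join(parts).split()
```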
Figure FIGREF5 shows n-ary relations extracted from OpenIE. As we can see, OpenIE extracts multiple relations for complex sentences. Here we select the most informative relation according to three criteria in the order of descending importance: (1) having the maximal number of overlapped tokens between the answer and the relation; (2) being assigned the highest confidence score by OpenIE; (3) containing maximum non-stop words. As shown in Figure FIGREF5, our criteria can select answer-relevant relations (waved in Figure FIGREF5), which is especially useful for sentences with extraneous information. In rare cases, OpenIE cannot extract any relation, we treat the sentence itself as the to the point context. Table TABREF8 shows some statistics to verify the intuition that the extracted relations can serve as more to the point context. We find that the tokens in relations are 61% more likely to be used in the target question than the tokens in sentences, and thus they are more to the point. On the other hand, on average the sentences contain one more question token than the relations (1.86 v.s. 2.87). Therefore, it is still necessary to take the original sentence into account to generate a more accurate question. Framework Description ::: Our Proposed Model ::: Overview. As shown in Figure FIGREF10, our framework consists offour components (1) Sentence Encoder and Relation Encoder, (2) Decoder, (3) Gated Attention Mechanism and (4) Dual Copy Mechanism. The sentence encoder and relation encoder encode the unstructured sentence and the structured answer-relevant relation, respectively. To select and combine the source information from the two encoders, a gated attention mechanism is employed to jointly attend both contextualized information sources, and a dual copy mechanism copies words from either the sentence or the relation. Framework Description ::: Our Proposed Model ::: Answer-aware Encoder. We employ two encoders to integrate information from the unstructured sentence $S$ and the answer-relevant relation $M$ separately. Sentence encoder takes in feature-enriched embeddings including word embeddings $\mathbf {w}$, linguistic embeddings $\mathbf {l}$ and answer position embeddings $\mathbf {a}$. We follow BIBREF9 to transform POS and NER tags into continuous representation ($\mathbf {l}^p$ and $\mathbf {l}^n$) and adopt a BIO labelling scheme to derive the answer position embedding (B: the first token of the answer, I: tokens within the answer fragment except the first one, O: tokens outside of the answer fragment). For each word $w_i$ in the sentence $S$, we simply concatenate all features as input: $\mathbf {x}_i^s= [\mathbf {w}_i; \mathbf {l}^p_i; \mathbf {l}^n_i; \mathbf {a}_i]$. Here $[\mathbf {a};\mathbf {b}]$ denotes the concatenation of vectors $\mathbf {a}$ and $\mathbf {b}$. We use bidirectional LSTMs to encode the sentence $(\mathbf {x}_1^s, \mathbf {x}_2^s, ..., \mathbf {x}_n^s)$ to get a contextualized representation for each token: where $\overrightarrow{\mathbf {h}}^{s}_i$ and $\overleftarrow{\mathbf {h}}^{s}_i$ are the hidden states at the $i$-th time step of the forward and the backward LSTMs. The output state of the sentence encoder is the concatenation of forward and backward hidden states: $\mathbf {h}^{s}_i=[\overrightarrow{\mathbf {h}}^{s}_i;\overleftarrow{\mathbf {h}}^{s}_i]$. The contextualized representation of the sentence is $(\mathbf {h}^{s}_1, \mathbf {h}^{s}_2, ..., \mathbf {h}^{s}_n)$. 
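The three selection criteria just described translate directly into a ranking key. The sketch below is one way to implement them; the relation records and the stop-word list are assumptions, and the fallback to the sentence itself covers the rare case where OpenIE returns nothing.

```python
def select_relation(relations, answer_tokens, sentence_tokens, stopwords):
    """Pick the most informative n-ary relation by, in descending importance:
    (1) token overlap with the answer, (2) OpenIE confidence score,
    (3) number of non-stop words. Each relation is assumed to carry a token
    list under "tokens" and a score under "confidence"."""
    if not relations:
        return sentence_tokens                 # no extraction: use the sentence
    answer = {t.lower() for t in answer_tokens}

    def rank(rel):
        toks = [t.lower() for t in rel["tokens"]]
        overlap = len(answer & set(toks))
        non_stop = sum(1 for t in toks if t not in stopwords)
        return (overlap, rel["confidence"], non_stop)

    return max(relations, key=rank)["tokens"]
```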
For the relation encoder, we firstly join all items in the n-ary relation $M$ into a sequence. Then we only take answer position embedding as an extra feature for the sequence: $\mathbf {x}_i^m= [\mathbf {w}_i; \mathbf {a}_i]$. Similarly, we take another bidirectional LSTMs to encode the relation sequence and derive the corresponding contextualized representation $(\mathbf {h}^{m}_1, \mathbf {h}^{m}_2, ..., \mathbf {h}^{m}_n)$. Framework Description ::: Our Proposed Model ::: Decoder. We use an LSTM as the decoder to generate the question. The decoder predicts the word probability distribution at each decoding timestep to generate the question. At the t-th timestep, it reads the word embedding $\mathbf {w}_{t}$ and the hidden state $\mathbf {u}_{t-1}$ of the previous timestep to generate the current hidden state: Framework Description ::: Our Proposed Model ::: Gated Attention Mechanism. We design a gated attention mechanism to jointly attend the sentence representation and the relation representation. For sentence representation $(\mathbf {h}^{s}_1, \mathbf {h}^{s}_2, ..., \mathbf {h}^{s}_n)$, we employ the Luong2015EffectiveAT's attention mechanism to obtain the sentence context vector $\mathbf {c}^s_t$, where $\mathbf {W}_a$ is a trainable weight. Similarly, we obtain the vector $\mathbf {c}^m_t$ from the relation representation $(\mathbf {h}^{m}_1, \mathbf {h}^{m}_2, ..., \mathbf {h}^{m}_n)$. To jointly model the sentence and the relation, a gating mechanism is designed to control the information flow from two sources: where $\odot $ represents element-wise dot production and $\mathbf {W}_g, \mathbf {W}_h$ are trainable weights. Finally, the predicted probability distribution over the vocabulary $V$ is computed as: where $\mathbf {W}_V$ and $\mathbf {b}_V$ are parameters. Framework Description ::: Our Proposed Model ::: Dual Copy Mechanism. To deal with the rare and unknown words, the decoder applies the pointing method BIBREF10, BIBREF11, BIBREF12 to allow copying a token from the input sentence at the $t$-th decoding step. We reuse the attention score $\mathbf {\alpha }_{t}^s$ and $\mathbf {\alpha }_{t}^m$ to derive the copy probability over two source inputs: Different from the standard pointing method, we design a dual copy mechanism to copy from two sources with two gates. The first gate is designed for determining copy tokens from two sources of inputs or generate next word from $P_V$, which is computed as $g^v_t = \text{sigmoid}(\mathbf {w}^v_g \tilde{\mathbf {h}}_t + b^v_g)$. The second gate takes charge of selecting the source (sentence or relation) to copy from, which is computed as $g^c_t = \text{sigmoid}(\mathbf {w}^c_g [\mathbf {c}_t^s;\mathbf {c}_t^m] + b^c_g)$. Finally, we combine all probabilities $P_V$, $P_S$ and $P_M$ through two soft gates $g^v_t$ and $g^c_t$. The probability of predicting $w$ as the $t$-th token of the question is: Framework Description ::: Our Proposed Model ::: Training and Inference. Given the answer $A$, sentence $S$ and relation $M$, the training objective is to minimize the negative log-likelihood with regard to all parameters: where $\mathcal {\lbrace }Q\rbrace $ is the set of all training instances, $\theta $ denotes model parameters and $\text{log} P(Q|A,S,M;\theta )$ is the conditional log-likelihood of $Q$. In testing, our model targets to generate a question $Q$ by maximizing: Experimental Setting ::: Dataset & Metrics We conduct experiments on the SQuAD dataset BIBREF3. 
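Because the gating equations are only partially visible in the text above, the sketch below shows one standard way to realize a gate between the sentence context vector and the relation context vector; the exact parameterization with W_g and W_h in the paper may differ, and the dual copy mechanism is omitted here.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Mix the sentence context vector c_s and the relation context vector c_m
    with a learned element-wise gate, so the decoder can decide at each step
    which source to rely on."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, c_s, c_m):
        g = torch.sigmoid(self.gate(torch.cat([c_s, c_m], dim=-1)))
        return g * c_s + (1.0 - g) * c_m       # element-wise gated mixture
```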
It contains 536 Wikipedia articles and 100k crowd-sourced question-answer pairs. The questions are written by crowd-workers and the answers are spans of tokens in the articles. We employ two different data splits by following Zhou2017NeuralQG and Du2017LearningTA . In Zhou2017NeuralQG, the original SQuAD development set is evenly divided into dev and test sets, while Du2017LearningTA treats SQuAD development set as its development set and splits original SQuAD training set into a training set and a test set. We also filter out questions which do not have any overlapped non-stop words with the corresponding sentences and perform some preprocessing steps, such as tokenization and sentence splitting. The data statistics are given in Table TABREF27. We evaluate with all commonly-used metrics in question generation BIBREF13: BLEU-1 (B1), BLEU-2 (B2), BLEU-3 (B3), BLEU-4 (B4) BIBREF17, METEOR (MET) BIBREF18 and ROUGE-L (R-L) BIBREF19. We use the evaluation script released by Chen2015MicrosoftCC. Experimental Setting ::: Baseline Models We compare with the following models. [leftmargin=*] s2s BIBREF13 proposes an attention-based sequence-to-sequence neural network for question generation. NQG++ BIBREF9 takes the answer position feature and linguistic features into consideration and equips the Seq2Seq model with copy mechanism. M2S+cp BIBREF14 conducts multi-perspective matching between the answer and the sentence to derive an answer-aware sentence representation for question generation. s2s+MP+GSA BIBREF8 introduces a gated self-attention into the encoder and a maxout pointer mechanism into the decoder. We report their sentence-level results for a fair comparison. Hybrid BIBREF15 is a hybrid model which considers the answer embedding for the question word generation and the position of context words for modeling the relative distance between the context words and the answer. ASs2s BIBREF16 replaces the answer in the sentence with a special token to avoid its appearance in the generated questions. Experimental Setting ::: Implementation Details We take the most frequent 20k words as our vocabulary and use the GloVe word embeddings BIBREF20 for initialization. The embedding dimensions for POS, NER, answer position are set to 20. We use two-layer LSTMs in both encoder and decoder, and the LSTMs hidden unit size is set to 600. We use dropout BIBREF21 with the probability $p=0.3$. All trainable parameters, except word embeddings, are randomly initialized with the Xavier uniform in $(-0.1, 0.1)$ BIBREF22. For optimization in the training, we use SGD as the optimizer with a minibatch size of 64 and an initial learning rate of 1.0. We train the model for 15 epochs and start halving the learning rate after the 8th epoch. We set the gradient norm upper bound to 3 during the training. We adopt the teacher-forcing for the training. In the testing, we select the model with the lowest perplexity and beam search with size 3 is employed for generating questions. All hyper-parameters and models are selected on the validation dataset. Results and Analysis ::: Main Results Table TABREF30 shows automatic evaluation results for our model and baselines (copied from their papers). Our proposed model which combines structured answer-relevant relations and unstructured sentences achieves significant improvements over proximity-based answer-aware models BIBREF9, BIBREF15 on both dataset splits. 
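The optimization recipe in the implementation details above (SGD at learning rate 1.0 for 15 epochs, halving after the 8th epoch, gradient norm clipped at 3) can be written down as a short PyTorch sketch. It is a plausible rendering of the reported settings, not the authors' code; the model, loss, and beam search are omitted.

```python
import torch

def build_optimizer(model):
    """SGD with lr 1.0; the LambdaLR schedule keeps the rate constant for the
    first 8 epochs and then halves it every epoch, matching the description
    of halving after the 8th epoch."""
    optimizer = torch.optim.SGD(model.parameters(), lr=1.0)
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer,
        lr_lambda=lambda epoch: 1.0 if epoch < 8 else 0.5 ** (epoch - 7),
    )
    return optimizer, scheduler

# per batch:  loss.backward()
#             torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=3.0)
#             optimizer.step(); optimizer.zero_grad()
# per epoch:  scheduler.step()
```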
Presumably, our structured answer-relevant relation is a generalization of the context explored by the proximity-based methods because they can only capture short dependencies around answer fragments while our extractions can capture both short and long dependencies given the answer fragments. Moreover, our proposed framework is a general one to jointly leverage structured relations and unstructured sentences. All compared baseline models which only consider unstructured sentences can be further enhanced under our framework. Recall that existing proximity-based answer-aware models perform poorly when the distance between the answer fragment and other non-stop sentence words that also appear in the ground truth question is large (Table TABREF2). Here we investigate whether our proposed model using the structured answer-relevant relations can alleviate this issue or not, by conducting experiments for our model under the same setting as in Table TABREF2. The broken-down performances by different relative distances are shown in Table TABREF40. We find that our proposed model outperforms Hybrid (our re-implemented version for this experiment) on all ranges of relative distances, which shows that the structured answer-relevant relations can capture both short and long term answer-relevant dependencies of the answer in sentences. Furthermore, comparing the performance difference between Hybrid and our model, we find the improvements become more significant when the distance increases from “$0\sim 10$” to “$>10$”. One reason is that our model can extract relations with distant dependencies to the answer, which greatly helps our model ignore the extraneous information. Proximity-based answer-aware models may overly emphasize the neighboring words of answers and become less effective as the useful context becomes further away from the answer in the complex sentences. In fact, the breakdown intervals in Table TABREF40 naturally bound its sentence length, say for “$>10$”, the sentences in this group must be longer than 10. Thus, the length variances in these two intervals could be significant. To further validate whether our model can extract long term dependency words. We rerun the analysis of Table TABREF40 only for long sentences (length $>$ 20) of each interval. The improvement percentages over Hybrid are shown in Table TABREF40, which become more significant when the distance increases from “$0\sim 10$” to “$>10$”. Results and Analysis ::: Case Study Figure FIGREF42 provides example questions generated by crowd-workers (ground truth questions), the baseline Hybrid BIBREF15, and our model. In the first case, there are two subsequences in the input and the answer has no relation with the second subsequence. However, we see that the baseline model prediction copies irrelevant words “The New York Times” while our model can avoid using the extraneous subsequence “The New York Times noted ...” with the help of the structured answer-relevant relation. Compared with the ground truth question, our model cannot capture the cross-sentence information like “her fifth album”, where the techniques in paragraph-level QG models BIBREF8 may help. In the second case, as discussed in Section SECREF1, this sentence contains a few facts and some facts intertwine. We find that our model can capture distant answer-relevant dependencies such as “mean temperature” while the proximity-based baseline model wrongly takes neighboring words of the answer like “coldest” in the generated question. 
Results and Analysis ::: Diverse Question Generation Another interesting observation is that for the same answer-sentence pair, our model can generate diverse questions by taking different answer-relevant relations as input. Such capability improves the interpretability of our model because the model is given not only what to be asked (i.e., the answer) but also the related fact (i.e., the answer-relevant relation) to be covered in the question. In contrast, proximity-based answer-aware models can only generate one question given the sentence-answer pair regardless of how many answer-relevant relations in the sentence. We think such capability can also validate our motivation: questions should be generated according to the answer-aware relations instead of neighboring words of answer fragments. Figure FIGREF45 show two examples of diverse question generation. In the first case, the answer fragment `Hugh L. Dryden' is the appositive to `NASA Deputy Administrator' but the subject to the following tokens `announced the Apollo program ...'. Our framework can extract these two answer-relevant relations, and by feeding them to our model separately, we can receive two questions asking different relations with regard to the answer. Related Work The topic of question generation, initially motivated for educational purposes, is tackled by designing many complex rules for specific question types BIBREF4, BIBREF23. Heilman2010GoodQS improve rule-based question generation by introducing a statistical ranking model. First, they remove extraneous information in the sentence to transform it into a simpler one, which can be transformed easily into a succinct question with predefined sets of general rules. Then they adopt an overgenerate-and-rank approach to select the best candidate considering several features. With the rise of dominant neural sequence-to-sequence learning models BIBREF5, Du2017LearningTA frame question generation as a sequence-to-sequence learning problem. Compared with rule-based approaches, neural models BIBREF24 can generate more fluent and grammatical questions. However, question generation is a one-to-many sequence generation problem, i.e., several aspects can be asked given a sentence, which confuses the model during train and prevents concrete automatic evaluation. To tackle this issue, Zhou2017NeuralQG propose the answer-aware question generation setting which assumes the answer is already known and acts as a contiguous span inside the input sentence. They adopt a BIO tagging scheme to incorporate the answer position information as learned embedding features in Seq2Seq learning. Song2018LeveragingCI explicitly model the information between answer and sentence with a multi-perspective matching model. Kim2019ImprovingNQ also focus on the answer information and proposed an answer-separated Seq2Seq model by masking the answer with special tokens. All answer-aware neural models treat question generation as a one-to-one mapping problem, but existing models perform poorly for sentences with a complex structure (as shown in Table TABREF2). Our work is inspired by the process of extraneous information removing in BIBREF0, BIBREF25. Different from Heilman2010GoodQS which directly use the simplified sentence for generation and cao2018faithful which only consider aggregate two sources of information via gated attention in summarization, we propose to combine the structured answer-relevant relation and the original sentence. 
Factoid question generation from structured text is initially investigated by Serban2016GeneratingFQ, but our focus here is leveraging structured inputs to help question generation over unstructured sentences. Our proposed model can take advantage of unstructured sentences and structured answer-relevant relations to maintain informativeness and faithfulness of generated questions. The proposed model can also be generalized in other conditional sequence generation tasks which require multiple sources of inputs, e.g., distractor generation for multiple choice questions BIBREF26. Conclusions and Future Work In this paper, we propose a question generation system which combines unstructured sentences and structured answer-relevant relations for generation. The unstructured sentences maintain the informativeness of generated questions while structured answer-relevant relations keep the faithfulness of questions. Extensive experiments demonstrate that our proposed model achieves state-of-the-art performance across several metrics. Furthermore, our model can generate diverse questions with different structured answer-relevant relations. For future work, there are some interesting dimensions to explore, such as difficulty levels BIBREF27, paragraph-level information BIBREF8 and conversational question generation BIBREF28. Acknowledgments This work is supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14208815 and No. CUHK 14210717 of the General Research Fund). We would like to thank the anonymous reviewers for their comments. We would also like to thank Department of Computer Science and Engineering, The Chinese University of Hong Kong for the conference grant support. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: On what datasets are experiments performed? Only give me the answer and do not output any other words.
Answer:
[ "SQuAD", "SQuAD" ]
276
5,483
single_doc_qa
929fcc350e514a2496a578127a3dcd3b
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Margaret Way (b. Brisbane d. Cleveland, Queensland, Australia ) was an Australian writer of romance novels and women's fiction. A prolific author, Way wrote more than 120 novels since 1970, many through Mills & Boon, a romance imprint of British publisher Harlequin UK Ltd., owned by Harlequin Enterprises. Biography Before her marriage, she was a well-known pianist, teacher, vocal coach and accompanist. She began writing when her son, Laurence Way, was born, a friend took a pile of Mills & Boon books to her, she read all and decided that she also could write these types of novels. She began to write and promote her country with her stories set in Australia. She sold her first novels in 1970. Margaret Way lives with her family in her native Brisbane. Beginning in 2013, Margaret began to self-publish, releasing her first "e-book" mid-July. Margaret died on the 10th of August 2022 in Cleveland, Queensland. Bibliography Single Novels King Country (1970) Blaze of Silk (1970) The Time of the Jacaranda (1970) Bauhinia Junction (1971) Man from Bahl Bahla (1971) Summer Magic (1971) Return to Belle Amber (1971) Ring of Jade (1972) Copper Moon (1972) Rainbow Bird (1972) Man Like Daintree (1972) Noonfire (1972) Storm Over Mandargi (1973) Wind River (1973) Love Theme (1974) McCabe's Kingdom (1974) Sweet Sundown (1974) Reeds of Honey (1975) Storm Flower (1975) Lesson in Loving (1975) Flight into Yesterday (1976) Red Cliffs of Malpara (1976) Man on Half-moon (1976) Swan's Reach (1976) Mutiny in Paradise (1977) One Way Ticket (1977) Portrait of Jaime (1977) Black Ingo (1977) Awakening Flame (1978) Wild Swan (1978) Ring of Fire (1978) Wake the Sleeping Tiger (1978) Valley of the Moon (1979) White Magnolia (1979) Winds of Heaven (1979) Blue Lotus (1979) Butterfly and the Baron (1979) Golden Puma (1980) Temple of Fire (1980) Lord of the High Valley (1980) Flamingo Park (1980) North of Capricorn (1981) Season for Change (1981) Shadow Dance (1981) McIvor Affair (1981) Home to Morning Star (1981) Broken Rhapsody (1982) The Silver Veil (1982) Spellbound (1982) Hunter's Moon (1982) Girl at Cobalt Creek (1983) No Alternative (1983) House of Memories (1983) Almost a Stranger (1984) A place called Rambulara (1984) Fallen Idol (1984) Hunt the Sun (1985) Eagle's Ridge (1985) The Tiger's Cage (1986) Innocent in Eden (1986) Diamond Valley (1986) Morning Glory (1988) Devil Moon (1988) Mowana Magic (1988) Hungry Heart (1988) Rise of an Eagle (1988) One Fateful Summer (1993) The Carradine Brand (1994) Holding on to Alex (1997) The Australian Heiress (1997) Claiming His Child (1999) The Cattleman's Bride (2000) The Cattle Baron (2001) The Husbands of the Outback (2001) Secrets of the Outback (2002) With This Ring (2003) Innocent Mistress (2004) Cattle Rancher, Convenient Wife (2007) Outback Marriages (2007) Promoted: Nanny to Wife (2007) Cattle Rancher, Secret Son (2007) Genni's Dilemma (2008) Bride At Briar Ridge (2009) Outback Heiress, Surprise Proposal (2009) Cattle Baron, Nanny Needed (2009) Legends of the Outback Series Mail Order Marriage (1999) The Bridesmaid's Wedding (2000) The English Bride (2000) A Wife at Kimbara (2000) Koomera Crossing Series Sarah's Baby (2003) Runaway Wife (2003) Outback Bridegroom (2003) Outback Surrender (2003) Home to Eden (2004) McIvor Sisters Series The Outback Engagement (2005) Marriage 
at Murraree (2005) Men Of The Outback Series The Cattleman (2006) The Cattle Baron's Bride (2006) Her Outback Protector (2006) The Horseman (2006) Outback Marriages Series Outback Man Seeks Wife (2007) Cattle Rancher, Convenient Wife (2007) Barons of the Outback Series Multi-Author Wedding At Wangaree Valley (2008) Bride At Briar's Ridge (2008) Family Ties Multi-Author Once Burned (1995) Hitched! Multi-Author A Faulkner Possession (1996) Simply the Best Multi-Author Georgia and the Tycoon (1997) The Big Event Multi-Author Beresford's Bride (1998) Guardian Angels Multi-Author Gabriel's Mission (1998) Australians Series Multi-Author 7. Her Outback Man (1998) 17. Master of Maramba (2001) 19. Outback Fire (2001) 22. Mistaken Mistress (2002) 24. Outback Angel (2002) 33. The Australian Tycoon's Proposal (2004) 35. His Heiress Wife (2004) Marrying the Boss Series Multi-Author Boardroom Proposal (1999) Contract Brides Series Multi-Author Strategy for Marriage (2002) Everlasting Love Series Multi-Author Hidden Legacy (2008) Diamond Brides Series Multi-Author The Australian's Society Bride (2008) Collections Summer Magic / Ring of Jade / Noonfire (1981) Wife at Kimbara / Bridesmaid's Wedding (2005) Omnibus in Collaboration Pretty Witch / Without Any Amazement / Storm Over Mandargi (1977) (with Lucy Gillen and Margaret Malcolm) Dear Caliban / Heart of the Eagle / Swans' Reach (1978) (with Jane Donnelly and Elizabeth Graham) The Bonds of Matrimony / Dragon Island / Reeds of Honey (1979) (with Elizabeth Hunter and Henrietta Reid) The Man Outside / Castles in Spain / McCabe's Kingdom (1979) (with Jane Donnelly and Rebecca Stratton) Winds From The Sea / Island of Darkness / Wind River (1979) (with Margaret Pargeter and Rebecca Stratton) Moorland Magic / Tree of Idleness / Sweet Sundown (1980) (with Elizabeth Ashton and Elizabeth Hunter) The Shifting Sands / Portrait of Jaime / Touched by Fire (1982) (with Jane Donnelly and Kay Thorpe) Head of Chancery / Wild Heart / One-Way Ticket (1986) (with Betty Beaty and Doris Smith) Heart of the Scorpion / The Winds of Heaven / Sweet Compulsion (1987) (with Janice Gray and Victoria Woolf) One Brief Sweet Hour / Once More With Feeling / Blue Lotus (1990) (with Jane Arbor and Natalie Sparks) Marry Me Cowboy (1995) (with Janet Dailey, Susan Fox and Anne McAllister) Husbands on Horseback (1996) (with Diana Palmer) Wedlocked (1999) (with Day Leclaire and Anne McAllister) Mistletoe Magic (1999) (with Betty Neels and Rebecca Winters) The Australians (2000) (with Helen Bianchin and Miranda Lee) Weddings Down Under (2001) (with Helen Bianchin and Jessica Hart) Outback Husbands (2002) (with Marion Lennox) The Mother's Day Collection (2002) (with Helen Dickson and Kate Hoffmann) Australian Nights (2003) (with Miranda Lee) Outback Weddings (2003) (with Barbara Hannay) Australian Playboys (2003) (with Helen Bianchin and Marion Lennox) Australian Tycoons (2004) (with Emma Darcy and Marion Lennox) A Mother's Day Gift (2004) (with Anne Ashley and Lucy Monroe) White Wedding (2004) (with Judy Christenberry and Jessica Steele) A Christmas Engagement (2004) (with Sara Craven and Jessica Matthews) A Very Special Mother's Day (2005) (with Anne Herries) All I Want for Christmas... 
(2005) (with Betty Neels and Jessica Steele) The Mills and Boon Collection (2006) (with Caroline Anderson and Penny Jordan) Outback Desire (2006) (with Emma Darcy and Carol Marinelli) To Mum, with Love (2006) (with Rebecca Winters) Australian Heroes (2007) (with Marion Lennox and Fiona McArthur) Tall, Dark and Sexy (2008) (with Caroline Anderson and Helen Bianchin) The Boss's Proposal (2008) (with Jessica Steele and Patricia Thayer) Island Heat / Outback Man Seeks Wife / Prince's Forbidden Virgin / One Night Before Marriage / Their Lost-and-found Family / Single Dad's Marriage Wish (2008) (with Robyn Donald, Marion Lennox, Carol Marinelli, Sarah Mayberry and Anne Oliver) Australian Billionaires (2009) (with Jennie Adams and Amy Andrews) Cattle Baron : Nanny Needed / Bachelor Dad on Her Doorstep (2009) (with Michelle Douglas) External links Margaret Way at Harlequin Enterprises Ltd Australian romantic fiction writers Australian women novelists Living people Year of birth missing (living people) Women romantic fiction writers Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What genre did Margaret Way write in? Only give me the answer and do not output any other words.
Answer:
[ "Romance novels and women's fiction." ]
276
2,167
single_doc_qa
1cc7a09c956849eba1f1a7245e2e1816
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Understanding the emotions expressed in a text or message is of high relevance nowadays. Companies are interested in this to get an understanding of the sentiment of their current customers regarding their products and the sentiment of their potential customers to attract new ones. Moreover, changes in a product or a company may also affect the sentiment of a customer. However, the intensity of an emotion is crucial in determining the urgency and importance of that sentiment. If someone is only slightly happy about a product, is a customer willing to buy it again? Conversely, if someone is very angry about customer service, his or her complaint might be given priority over somewhat milder complaints. BIBREF0 present four tasks in which systems have to automatically determine the intensity of emotions (EI) or the intensity of the sentiment (Valence) of tweets in the languages English, Arabic, and Spanish. The goal is to either predict a continuous regression (reg) value or to do ordinal classification (oc) based on a number of predefined categories. The EI tasks have separate training sets for four different emotions: anger, fear, joy and sadness. Due to the large number of subtasks and the fact that this language does not have many resources readily available, we only focus on the Spanish subtasks. Our work makes the following contributions: Our submissions ranked second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc), demonstrating that the proposed method is accurate in automatically determining the intensity of emotions and sentiment of Spanish tweets. This paper will first focus on the datasets, the data generation procedure, and the techniques and tools used. Then we present the results in detail, after which we perform a small error analysis on the largest mistakes our model made. We conclude with some possible ideas for future work. Data For each task, the training data that was made available by the organizers is used, which is a selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment BIBREF1 . Links and usernames were replaced by the general tokens URL and @username, after which the tweets were tokenized by using TweetTokenizer. All text was lowercased. In a post-processing step, it was ensured that each emoji is tokenized as a single token. Word Embeddings To be able to train word embeddings, Spanish tweets were scraped between November 8, 2017 and January 12, 2018. We chose to create our own embeddings instead of using pre-trained embeddings, because this way the embeddings would resemble the provided data set: both are based on Twitter data. Added to this set was the Affect in Tweets Distant Supervision Corpus (DISC) made available by the organizers BIBREF0 and a set of 4.1 million tweets from 2015, obtained from BIBREF2 . After removing duplicate tweets and tweets with fewer than ten tokens, this resulted in a set of 58.7 million tweets, containing 1.1 billion tokens. The tweets were preprocessed using the method described in Section SECREF6 . The word embeddings were created using word2vec in the gensim library BIBREF3 , using CBOW, a window size of 40 and a minimum count of 5. The feature vectors for each tweet were then created by using the AffectiveTweets WEKA package BIBREF4 . 
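The embedding training described above maps onto a few lines of gensim. The CBOW setting, window of 40, and minimum count of 5 are taken from the passage; the embedding dimension is not stated there, so the value below is an assumption, and the input is expected to be the already preprocessed token lists (lowercased, URL/@username placeholders, emojis as single tokens).

```python
from gensim.models import Word2Vec

def train_tweet_embeddings(tweets, dim=300):
    """Train CBOW word2vec embeddings on preprocessed Spanish tweets.
    `tweets` is an iterable of token lists; `dim` is illustrative since the
    vector size is not reported in the passage."""
    return Word2Vec(
        sentences=tweets,
        sg=0,             # CBOW
        window=40,
        min_count=5,
        vector_size=dim,  # gensim >= 4.0; older versions use size=
        workers=4,
    )
```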
Translating Lexicons Most lexical resources for sentiment analysis are in English. To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 . All lexicons from the AffectiveTweets package were translated, except for SentiStrength. Instead of translating this lexicon, the English version was replaced by the Spanish variant made available by BIBREF6 . For each subtask, the optimal combination of lexicons was determined. This was done by first calculating the benefits of adding each lexicon individually, after which only beneficial lexicons were added until the score did not increase anymore (e.g. after adding the best four lexicons the fifth one did not help anymore, so only four were added). The tests were performed using a default SVM model, with the set of word embeddings described in the previous section. Each subtask thus uses a different set of lexicons (see Table TABREF1 for an overview of the lexicons used in our final ensemble). For each subtask, this resulted in a (modest) increase on the development set, between 0.01 and 0.05. Translating Data The training set provided by BIBREF0 is not very large, so it was interesting to find a way to augment the training set. A possible method is to simply translate the datasets into other languages, leaving the labels intact. Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. This new set of “Spanish” data was then added to our original training set. Again, the machine translation platform Apertium BIBREF5 was used for the translation of the datasets. Algorithms Used Three types of models were used in our system, a feed-forward neural network, an LSTM network and an SVM regressor. The neural nets were inspired by the work of Prayas BIBREF7 in the previous shared task. Different regression algorithms (e.g. AdaBoost, XGBoost) were also tried due to the success of SeerNet BIBREF8 , but our study was not able to reproduce their results for Spanish. For both the LSTM network and the feed-forward network, a parameter search was done for the number of layers, the number of nodes and dropout used. This was done for each subtask, i.e. different tasks can have a different number of layers. All models were implemented using Keras BIBREF9 . After the best parameter settings were found, the results of 10 system runs to produce our predictions were averaged (note that this is different from averaging our different type of models in Section SECREF16 ). For the SVM (implemented in scikit-learn BIBREF10 ), the RBF kernel was used and a parameter search was conducted for epsilon. Detailed parameter settings for each subtask are shown in Table TABREF12 . Each parameter search was performed using 10-fold cross validation, as to not overfit on the development set. Semi-supervised Learning One of the aims of this study was to see if using semi-supervised learning is beneficial for emotion intensity tasks. For this purpose, the DISC BIBREF0 corpus was used. This corpus was created by querying certain emotion-related words, which makes it very suitable as a semi-supervised corpus. However, the specific emotion the tweet belonged to was not made public. Therefore, a method was applied to automatically assign the tweets to an emotion by comparing our scraped tweets to this new data set. 
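The greedy lexicon selection described above can be sketched as a forward-selection loop; `evaluate` is a placeholder for training the default SVM with the word embeddings plus the candidate lexicon features and returning its development score, so the snippet reflects the described procedure rather than the actual experiment code.

```python
def greedy_lexicon_selection(candidate_lexicons, evaluate):
    """Rank lexicons by their individual gain over the no-lexicon baseline,
    then add them in that order until the development score stops improving."""
    baseline = evaluate([])
    gains = {lex: evaluate([lex]) - baseline for lex in candidate_lexicons}
    ranked = sorted((lex for lex, g in gains.items() if g > 0),
                    key=lambda lex: gains[lex], reverse=True)

    chosen, best = [], baseline
    for lex in ranked:
        score = evaluate(chosen + [lex])
        if score > best:
            chosen, best = chosen + [lex], score
        else:
            break        # the next-best lexicon no longer helps
    return chosen, best
```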
First, in an attempt to obtain the query-terms, we selected the 100 words which occurred most frequently in the DISC corpus, in comparison with their frequencies in our own scraped tweets corpus. Words that were clearly not indicators of emotion were removed. The rest was annotated per emotion or removed if it was unclear to which emotion the word belonged. This allowed us to create silver datasets per emotion, assigning tweets to an emotion if an annotated emotion-word occurred in the tweet. Our semi-supervised approach is quite straightforward: first a model is trained on the training set and then this model is used to predict the labels of the silver data. This silver data is then simply added to our training set, after which the model is retrained. However, an extra step is applied to ensure that the silver data is of reasonable quality. Instead of training a single model initially, ten different models were trained which predict the labels of the silver instances. If the highest and lowest prediction do not differ more than a certain threshold the silver instance is maintained, otherwise it is discarded. This results in two parameters that could be optimized: the threshold and the number of silver instances that would be added. This method can be applied to both the LSTM and feed-forward networks that were used. An overview of the characteristics of our data set with the final parameter settings is shown in Table TABREF14 . Usually, only a small subset of data was added to our training set, meaning that most of the silver data is not used in the experiments. Note that since only the emotions were annotated, this method is only applicable to the EI tasks. Ensembling To boost performance, the SVM, LSTM, and feed-forward models were combined into an ensemble. For both the LSTM and feed-forward approach, three different models were trained. The first model was trained on the training data (regular), the second model was trained on both the training and translated training data (translated) and the third one was trained on both the training data and the semi-supervised data (silver). Due to the nature of the SVM algorithm, semi-supervised learning does not help, so only the regular and translated model were trained in this case. This results in 8 different models per subtask. Note that for the valence tasks no silver training data was obtained, meaning that for those tasks the semi-supervised models could not be used. Per task, the LSTM and feed-forward model's predictions were averaged over 10 prediction runs. Subsequently, the predictions of all individual models were combined into an average. Finally, models were removed from the ensemble in a stepwise manner if the removal increased the average score. This was done based on their original scores, i.e. starting out by trying to remove the worst individual model and working our way up to the best model. We only consider it an increase in score if the difference is larger than 0.002 (i.e. the difference between 0.716 and 0.718). If at some point the score does not increase and we are therefore unable to remove a model, the process is stopped and our best ensemble of models has been found. This process uses the scores on the development set of different combinations of models. Note that this means that the ensembles for different subtasks can contain different sets of models. The final model selections can be found in Table TABREF17 . 
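The agreement filter for the silver data can be summarized in a few lines. The choice to label a kept instance with the mean of the ten predictions is an assumption (the passage does not say how the silver label is set), and `train_fn`/`predict_fn` stand in for the neural models; the threshold and the cap on added instances are the two tunable parameters mentioned above.

```python
import numpy as np

def filter_silver_data(train_X, train_y, silver_X, train_fn, predict_fn,
                       threshold=0.2, n_models=10, max_added=None):
    """Train `n_models` models on the labeled data, keep a silver tweet only
    if the spread between the highest and lowest predicted intensity is at
    most `threshold`, and return the kept tweets with their mean prediction."""
    preds = np.array([predict_fn(train_fn(train_X, train_y), silver_X)
                      for _ in range(n_models)])       # (n_models, n_silver)
    spread = preds.max(axis=0) - preds.min(axis=0)
    keep = np.where(spread <= threshold)[0]
    if max_added is not None:
        keep = keep[:max_added]
    return [silver_X[i] for i in keep], preds.mean(axis=0)[keep]
```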
Results and Discussion Table TABREF18 shows the results on the development set of all individuals models, distinguishing the three types of training: regular (r), translated (t) and semi-supervised (s). In Tables TABREF17 and TABREF18 , the letter behind each model (e.g. SVM-r, LSTM-r) corresponds to the type of training used. Comparing the regular and translated columns for the three algorithms, it shows that in 22 out of 30 cases, using translated instances as extra training data resulted in an improvement. For the semi-supervised learning approach, an improvement is found in 15 out of 16 cases. Moreover, our best individual model for each subtask (bolded scores in Table TABREF18 ) is always either a translated or semi-supervised model. Table TABREF18 also shows that, in general, our feed-forward network obtained the best results, having the highest F-score for 8 out of 10 subtasks. However, Table TABREF19 shows that these scores can still be improved by averaging or ensembling the individual models. On the dev set, averaging our 8 individual models results in a better score for 8 out of 10 subtasks, while creating an ensemble beats all of the individual models as well as the average for each subtask. On the test set, however, only a small increase in score (if any) is found for stepwise ensembling, compared to averaging. Even though the results do not get worse, we cannot conclude that stepwise ensembling is a better method than simply averaging. Our official scores (column Ens Test in Table TABREF19 ) have placed us second (EI-Reg, EI-Oc), fourth (V-Reg) and fifth (V-Oc) on the SemEval AIT-2018 leaderboard. However, it is evident that the results obtained on the test set are not always in line with those achieved on the development set. Especially on the anger subtask for both EI-Reg and EI-Oc, the scores are considerably lower on the test set in comparison with the results on the development set. Therefore, a small error analysis was performed on the instances where our final model made the largest errors. Error Analysis Due to some large differences between our results on the dev and test set of this task, we performed a small error analysis in order to see what caused these differences. For EI-Reg-anger, the gold labels were compared to our own predictions, and we manually checked 50 instances for which our system made the largest errors. Some examples that were indicative of the shortcomings of our system are shown in Table TABREF20 . First of all, our system did not take into account capitalization. The implications of this are shown in the first sentence, where capitalization intensifies the emotion used in the sentence. In the second sentence, the name Imperator Furiosa is not understood. Since our texts were lowercased, our system was unable to capture the named entity and thought the sentence was about an angry emperor instead. In the third sentence, our system fails to capture that when you are so angry that it makes you laugh, it results in a reduced intensity of the angriness. Finally, in the fourth sentence, it is the figurative language me infla la vena (it inflates my vein) that the system is not able to understand. The first two error-categories might be solved by including smart features regarding capitalization and named entity recognition. However, the last two categories are problems of natural language understanding and will be very difficult to fix. 
Conclusion To conclude, the present study described our submission for the Semeval 2018 Shared Task on Affect in Tweets. We participated in four Spanish subtasks and our submissions ranked second, second, fourth and fifth place. Our study aimed to investigate whether the automatic generation of additional training data through translation and semi-supervised learning, as well as the creation of stepwise ensembles, increase the performance of our Spanish-language models. Strong support was found for the translation and semi-supervised learning approaches; our best models for all subtasks use either one of these approaches. These results suggest that both of these additional data resources are beneficial when determining emotion intensity (for Spanish). However, the creation of a stepwise ensemble from the best models did not result in better performance compared to simply averaging the models. In addition, some signs of overfitting on the dev set were found. In future work, we would like to apply the methods (translation and semi-supervised learning) used on Spanish on other low-resource languages and potentially also on other tasks. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What were the scores of their system? Only give me the answer and do not output any other words.
Answer:
[ "column Ens Test in Table TABREF19" ]
276
3,036
single_doc_qa
698a409df607423d8819d2953b0ef0d4
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Paper Info Title: Nuclear Liquid-Gas Transition in the Strong Coupling Regime of Lattice QCD Publish Date: 28 Mar 2023 Author List: J Kim (from Institute for Advanced Simulation (IAS-4), Forschungszentrum Jülich), P Pattanaik (from Fakultät für Physik, Bielefeld University), W Unger (from Fakultät für Physik, Bielefeld University) Figure FIG. 1.Typical 2-dimension configuration at β = 1.0, at non-zero quark mass, temperature, chemical potential.The black dots are monomers, the blue lines are dimers, the red arrows are baryon loop segments (or triplets g b + f b = ±3 if adjacent to a non-trivial plaquette), and the green squares are plaquette occupations ±1.The actual configurations are 3+1-dimensional. FIG.2.Chiral susceptibility on a 2 4 volume for various quark masses, as a function of the bare anisotropy γ (with aT = γ 2 /2), analytic results from enumeration compared to numerical data from simulations via the worm algorithm. FIG.3.Various observables in the µB-T plane on a 2 4 volume at amq = 0.1.The back-bending of the first order transition at temperatures below aT = 0.5 in all observables is an artifact of the small volume, and vanishes in the thermodynamic limit.The temperature aT = 1/2 corresponds to the isotropic lattice here. FIG. 4. The chiral condensate (left) and the baryon density (right) for quark mass m = 1.5 as a function of the chemical potential and for various temperatures. FIG. 7. ∆f at amq = 0.2 as a function of chemical potential and β the on a 6 3 × 4 lattice FIG. 8. Baryon mass from ∆E as a function of the quark mass amq, and contributions from different dual variables: monomers, dimers and baryon segments. FIG. 9. Baryon density for volume 4 3 × 8 in the full µB − mq plane, illustrating the strong quark mass dependence of the onset to nuclear matter. FIG. 10.Baryonic observables on various volumes in the first order region amq = 1.5.Vertical bands indicate the mean and error of the nuclear transition. FIG. 12. Left: Extrapolation of the pseudo-critical values of µB for the various volumes into the thermodynamic limit.Right: Critical baryon chemical potential for different quark masses.The first order transition region is shown in blue, the crossover region is shown in red and the range for critical end point is marked in black. FIG. 17. Nuclear interaction scaled with baryon mass.As the quark mass increases, it tends to zero. FIG. 18. Critical baryon chemical potential and baryon mass from different approaches. Parameters for the Monte Carlo runs to determine the nuclear transition at strong coupling, with statistics after thermalization. abstract The nuclear liquid-gas transition from a gas of hadrons to a nuclear phase cannot be determined numerically from conventional lattice QCD due to the severe sign problem at large values of the baryon chemical potential. In the strong coupling regime of lattice QCD with staggered quarks, the dual formulation is suitable to address the nuclear liquid gas transition. We determine this first order transition at low temperatures and as a function of the quark mass and the inverse gauge coupling β. We also determine the baryon mass and discuss the nuclear interactions as a function of the quark mass, and compare to mean field results. 
It is known from experiments that at low temperatures, there is a phase transition between dilute hadron gas and dense nuclear matter as the baryon chemical potential increases. This transition is of first order and terminates at about T c = 16 MeV in a critical end point. The value of the chemical potential µ 1st B at zero temperature is given roughly by the baryon mass m B , where the difference of µ 1st B −m B is due to nuclear interactions. For a review on nuclear interactions see . As the nuclear force between baryons to form nuclear matter is due to the residual strong interactions between quarks and gluons, it should be accurately described by QCD. We choose to study the nuclear transition and nuclear interaction via lattice QCD , with its Lagrangian being a function of the quark mass and the inverse gauge coupling. In order to understand the nature of the transition, it is helpful to study its dependence on these parameters. However, at finite baryon density, lattice QCD has the infamous sign problem which does not allow us to perform direct Monte Carlo simulations on the lattice. Various methods have been proposed to overcome the numerical sign problem, but they are either limited to µ B /T 3 or can not yet address full QCD in 3+1 dimensions in the whole µ B − T plane , in particular the nuclear transition is out of reach. An alternative method is to study lattice QCD via the strong coupling expansion. There are two established effective theories for lattice QCD based on this: (1) the 3-dim. effective theory for Wilson fermions in terms of Polyakov loops, arising from a joint strong coupling and hopping parameter expansion , the dual representation for staggered fermions in 3+1 dimensions, with dual degrees of freedom describing mesons and baryons. Both effective theories have their limitations: is limited to rather heavy quarks (but is valid for large values of β) whereas ( ) is limited to the strong coupling regime β 1 (but is valid for any quark mass). We study lattice QCD in the dual formulation, both at infinite bare gauge coupling, β = 0, and at leading order of the strong coupling expansion in the regime β < 1, which is far from the continuum limit. But since strong coupling lattice QCD shares important features with QCD, such as confinement, and chiral symmetry breaking and its restoration at the chiral transition temperature, and a nuclear liquid gas transition, we may get insights into the mechanisms, in particular as the dual variables give more information in terms of its world lines, as compared to the usual fermion determinant that depends on the gauge variables. To establish a region of overlap of both effective theories, we have chosen to perform the Monte Carlo simulations in the dual formulation extending to rather large quark masses. This paper is organized as follows: in the first part we explain the dual formulation in the strong coupling regime, in the second part we provide analytic results based on exact enumeration and mean field theory, in the third part we explain the setup of our Monte Carlo simulations and present result on the m q -and β-dependence of the nuclear transition. Since the strong coupling regime does not have a well defined lattice spacing, we also determine the baryon mass am B to set the parameters of the grand-canonical partition function, aT and aµ B , in units of am B . We conclude by discussing the resulting nuclear interactions, and compare our findings with other results. 
Staggered action of strong coupling QCD and its dual representation In the strong coupling regime, the gauge integration is performed first, followed by the Grassmann integration to obtain a dual formulation. This was pioneered for the strong coupling limit in and has been extended by one of us to include gauge corrections . The sign problem is mild in the strong coupling limit and still under control for β < 1, where we can apply sign reweighting. The dual degrees of freedom are color-singlet mesons and baryons, which are point-like in the strong coupling limit, and become extended about a lattice spacing by incorporating leading order gauge corrections. The partition function of lattice QCD is given by where DU is the Haar measure, U ∈ SU(3) are the gauge fields on the lattice links (x, μ) and { χx , χ x } are the unrooted staggered fermions at the lattice sites x. The gauge action S G [U] is given by the Wilson plaquette action and the staggered fermion action S F [ χ, χ, U] is: where the gauge action depends on the inverse gauge coupling β = 2Nc g 2 and the fermion action depends on the quark chemical potential aµ q which favors quarks in the positive temporal direction, and the bare quark mass am q . First we consider the strong coupling limit where the inverse gauge coupling β=0 and hence the gauge action S G [U] drops out from the partition function in this limit. The gauge integration is over terms depending only on the individual links (x, μ) so the partition function factorizes into a product of one-link integrals and we can write it as: with z(x, μ) the one-link gauge integral that can be eval-uated from invariant integration, as discussed in , where we write the one-link integral in terms of new hadronic variables: Only terms of the form (M (x)M (y)) k x, μ (with k x,μ called dimers which count the number of meson hoppings) and B(y)B(x) and B(x)B(y) (called baryon links) are present in the solution of the one-link integral. The sites x and y = x + μ are adjacent lattice sites. It remains to perform the Grassmann integral of the fermion fields χ, χ. This requires to expand the exponential containing the quark mass in Eq. (4) (left), which results in the terms (2am q M (x)) nx (with n x called monomers). To obtain non-vanishing results, at every site, the 2N c Grassman variables χ x,i and χx,i have to appear exactly once, resulting in the Grassmann constraint (GC): where n x is the number of monomers, k x,μ is the number of dimers and the baryons form self-avoiding loops x,μ , which due to the constraint cannot coexist with monomers or dimers. With this, we obtain an exact rewriting of the partition function Eq. ( ) for N c = 3, in terms of integer-valued dual degrees of freedom {n, k, }: where the sum over valid configurations has to respect the constraint (GC). The first term in the partition function is the contribution from dimers and the second term is the contribution from monomers. The weight factor w( ) for each baryon loop depends on the baryon chemical potential µ B = 3µ q and induces a sign factor σ( ) which depends on the geometry of : Here, ω is the winding number of the loop . The total sign factor σ( ) ∈ {±1} is explicitly calculated for every configuration. We apply sign reweighting as the dual formulation has a mild sign problem: baryons are non-relativistic and usually have loop geometries that have a positive signs. 
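To illustrate how the sign reweighting mentioned above is typically carried out in practice, the following minimal Python sketch (an illustration added here, not code from the cited work; the observable and the synthetic numbers are placeholders) estimates a full-ensemble expectation value from configurations sampled with the sign-quenched weight, using ⟨O⟩ = ⟨O σ⟩_|| / ⟨σ⟩_||.

    import numpy as np

    def sign_reweighted_mean(obs, sign):
        # Estimate <O> in the full ensemble from a sign-quenched sample:
        # <O> = <O * sigma>_|| / <sigma>_||
        obs = np.asarray(obs, dtype=float)
        sign = np.asarray(sign, dtype=float)
        avg_sign = sign.mean()
        if abs(avg_sign) < 1e-12:
            raise ValueError("average sign ~ 0: reweighting is unreliable")
        return (obs * sign).mean() / avg_sign

    # Toy usage with synthetic data mimicking a mild sign problem
    rng = np.random.default_rng(0)
    signs = rng.choice([1.0, -1.0], size=10_000, p=[0.999, 0.001])
    baryon_density = rng.normal(0.1, 0.02, size=10_000)   # placeholder observable
    print(sign_reweighted_mean(baryon_density, signs), signs.mean())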
The dual partition function of the strong coupling limit is simulated with the worm algorithm (see Section III A) and the sign problem is essentially solved in this limit. Extension to finite β The leading order gauge corrections O(β) to the strong coupling limit are obtained by expanding the Wilson gauge action Eq. ( ) before integrating out the gauge links. A formal expression is obtained by changing the order of integration (first gauge links, then Grassmann-valued fermions) within the QCD partition function: With this the O (β) partition function is The challenge in computing Z (1) is to address the SU(N c ) integrals that receive contributions from the elementary plaquette U P . Link integration no longer factorizes, however the tr[U P ] can be decomposed before integration: Integrals of the type J ij with two open color indices -as compared to link integration at strong coupling -have been derived from generating functions for either J = 0 or for G = U(N c ) . The SU(3) result was discussed in , in terms of the dual variables, neglecting rotation and reflection symmetries, there are 19 distinct diagrams to be considered. The resulting partition function, valid to O(β), is with q P ∈ {0, ±1}, and the site weights w x → ŵx , bond weights w b → ŵb and baryon loop weights w → ŵ receive modifications compared to the strong coupling limit Eq. ( ) for sites and bonds adjacent to an excited plaquette q P = 1. The weights are given in , and are rederived for any gauge group in . The configurations {n, k, , q p } must satisfy at each site x the constraint inherited from Grassmann integration: which is the modified version of Eq. ( ) with q x = 1 if located at the corner of an excited plaquette q p = 0, otherwise q x = 0. A more general expression that we obtained via group theory and is valid to higher orders of the strong coupling expansion is discussed in terms of tensor networks . A typical 2-dimensional configuration that arises at β = 1 in the Monte Carlo simulations is given in Fig. . Note that if a baryon loop enters a non-trivial plaquette, one quark is separated from the two other quarks, resulting in the baryon being extended object, rather being point-like in the strong coupling limit. The O(β) partition function has been used in the chiral limit to study the full µ B − T plane via reweighting from the strong coupling ensemble. Whereas the second order chiral transition for small values of the aµ B decreased up to the tri-critical point, the first order nuclear transition was invariant: aµ 1st B 1.78(1) at zero temperature has no β-dependence. For the ratio T (µ B = 0)/µ 1st B (T 0) we found the values 0.787 for β = 0 and 0.529 β = 1, which should be compared to T c / 0.165 for full QCD . However, since reweighting cannot be fully trusted across a first order boundary, direct simulations at nonzero β are necessary. The Monte Carlo technique to update plaquette variables is discussed in Section III A. In this section, we provide analytic results from exact enumeration for small volumes, and mean field results based on the 1/d expansion, valid in the thermodynamic limit. The main purpose is to compare our Monte Carlo results to these analytic predictions. Exact enumeration To establish that our Monte Carlo simulations indeed sample the partition functions Eq. ( ) and Eq. ( ), we have obtained analytic results on a 2 4 volume at strong coupling, and at finite beta in two dimensions on a 4 × 4 volume, comparing O (β) and O β 2 truncations. 
Our strategy to obtain an exact enumeration of the partition function Z is to enumerate plaquette configurations first, then fixing the fermion fluxes which together with the gauge fluxes that are induced by the plaquettes form a singlet, a triplet or anti-triplet, i.e. on a given bond b, g b + f b ∈ {−3, 0, 3}, and last we perform the monomerdimer enumeration on the available sites not saturated by fermions yet by a depth-first algorithm . At strong coupling, with no plaquettes, g b = 0 and f b are baryonic fluxes. All observables that can be written in terms of derivatives of log(z), such as the baryon density, the chiral condensate, the energy density, and also the average sign, are shown in Fig. Expectations from mean field theory Another analytical method to study strong coupling lattice QCD is the mean field approach, where the partition function is expanded in 1 d (d is the spatial dimension) and then a Hubbard-Stratonovich transformation performed . After this procedure, the free energy is a function of temperature T , the chiral condensate σ and chemical potential µ B : here E[m] is one-dimensional quark excitation energy which is a function of the quark mass m = am q . For N c = 3 and d = 3 we determined the minimum of the free energy with respect to the chiral condensate. This gives us the equilibrium chiral condensate as a function of (T, m, µ B ). The chiral condensate and the baryon density as a function of the baryon chemical potential in lattice units aµ B and for various temperatures at quark mass m = 1.5 is shown in Fig. . We have determined the critical temperature to be aT c = 0.23 , which is characterized by an infinite slope of the chiral condensate. For lower temperatures, there is a clear discontinuity of the chiral con-densate, separating the low density phase from the high density phase. For temperatures above and in the vicinity of aT c the chiral condensate and baryon density has no discontinuity but rapidly changes, corresponding to a crossover transition. With this method, the phase diagram is plotted for different quark masses in Fig. . The second order phase transition in the chiral limit is plotted in solid blue line, the dotted lines show the first order phase transition for different quark masses and the solid red line indicates the critical end point for the different quark masses. Mean field theory also gives an expression for the pion mass am π and the baryon mass am B : The mean field baryon mass for N c = 3, d = 3 is also plotted in red in Fig. . Whereas the baryon mass is around N c in the chiral limit (am B 3.12 for N c = 3), it approximately doubles at m = 3.5 (am B 6.28) which corresponds to the pion mass am π = 4.45, i.e. m π /m B = 0.708. Hence, at around bare mass m = 3.5, the valence quark mass of the baryon corresponds roughly to 1/3 of the chiral limit value of the baryon mass. The first Monte Carlo simulations that could extend in the µ B − T plane was the MDP algorithm , but it required the introduction of the worm algorithm to make substantial progress. First studies of the worm algorithm applied to the strong coupling limit QCD (with gauge group U(3)) are , and for gauge group SU . Monte Carlo simulations to extend the worm to incorporate leading order corrections were first proposed in . We will shortly review the setup of or Monte Carlo strategy for the nuclear transition, with an emphasis on the challenges to address large quark masses. 
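The mean-field procedure described above — minimizing the free energy with respect to the chiral condensate and locating the discontinuity of the minimizer as the chemical potential is varied — can be sketched as follows. The free-energy function in this sketch is a schematic Landau-type stand-in with invented coefficients (the actual strong-coupling expression is not reproduced here), so only the scan-and-minimize logic is meant to illustrate the method.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def free_energy(sigma, mu):
        # Schematic sextic Landau form with a first-order transition; a stand-in,
        # NOT the strong-coupling mean-field expression of the paper.
        a = 0.5 * mu - 0.25          # quadratic coefficient grows with mu (assumption)
        return a * sigma**2 - sigma**4 + sigma**6

    def equilibrium_sigma(mu, sigma_max=2.0):
        # Global minimum over sigma >= 0: coarse grid scan followed by local refinement.
        grid = np.linspace(0.0, sigma_max, 401)
        s0 = grid[np.argmin(free_energy(grid, mu))]
        lo, hi = max(0.0, s0 - 0.01), min(sigma_max, s0 + 0.01)
        res = minimize_scalar(free_energy, args=(mu,), bounds=(lo, hi), method="bounded")
        return res.x

    mus = np.linspace(0.0, 2.0, 201)
    sigmas = np.array([equilibrium_sigma(mu) for mu in mus])
    jump = int(np.argmax(np.abs(np.diff(sigmas))))
    print("first-order jump in the condensate near mu =", 0.5 * (mus[jump] + mus[jump + 1]))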
Strong Coupling Without any further resummation, there is a mild sign problem in the dual formulation of lattice QCD in the strong coupling limit. When the average sign σ is not too small (i.e. not close to zero), most of the configurations have a positive weight, which allows us to apply sign reweighting. In Fig. , ∆f is plotted as a function of the baryon chemical potential and the quark mass. It is seen that ∆f is close to zero in most cases, except near the critical chemical potential and for small quark masses, but never exceeds 5 × 10⁻⁴. Hence sign reweighting can be performed in the full parameter space. The observation that the sign problem becomes even milder with increasing mass is related to the fact that larger critical chemical potentials result in a larger fraction of static baryons (spatial baryon hoppings become rare). (Figure: ∆f at strong coupling as a function of chemical potential and quark mass on a 6³ × 8 lattice; the sign problem becomes milder as the quark mass increases.) Finite β All runs at finite β have been obtained for N_τ = 4, which corresponds to a moderately low temperature aT = 0.25 compared to the value of the chiral transition, aT ≈ 1.54. Those simulations were too expensive to attempt N_τ = 8 runs, in particular as higher statistics would be required. The spatial volumes are 4³, 6³ and 8³. The β values range from 0.0 to 1.0 with step size 0.1, and the am_q values from 0.00 to 1.00 with step size 0.01. The values of aµ were chosen close to the nuclear transition; the scanning range is shifted to larger values as am_q increases. At small quark masses the scanning range is from aµ = 0.4 to 1.0, and for large quark masses from 0.6 to 1.2, with step size 0.01. The statistics used are 15 × 10⁴ measurements, with 40 × N_s³ worm updates between measurements. Residual sign problem Although it is possible to resum the sign problem at strong coupling via a resummation of baryon and pion world lines, this is not possible when including gauge corrections. In order to compare both sign problems, we kept the original dual formulation to monitor the severity of the sign problem. This is done via the relation between the average sign σ and the difference ∆f of the free energy densities of the full ensemble, f, and of the sign-quenched ensemble, f_||. Nuclear interactions We have found that aµ_B^1st is very different from the baryon mass. This must be due to strong attractive interactions of nucleons. In contrast to continuum physics, in the strong coupling limit there is no pion exchange due to the Grassmann constraint. Instead, nucleons are point-like and hard-core repulsive. However, the pion bath, which is modified by the presence of static baryons, results in an attractive interaction. In , this has been analyzed in the chiral limit using the snake algorithm, and it has been found that the attractive force is of entropic origin. Here, we quantify the nuclear interaction not via the nuclear potential, but via the difference between the critical baryon chemical potential and the baryon mass, in units of the baryon mass, as shown in Fig. , given am_B as measured in Section III C. This compares better to the 3-dim. effective theory. The nuclear interaction is maximal, more than 40%, in the chiral limit, which is related to the pions being massless: the modification of the pion bath is maximal. 
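The two quantities used above — the free-energy monitor of the sign problem and the binding measure of the nuclear interaction — each amount to a one-line computation. The sketch below is an added illustration: the normalization of ∆f assumes the standard relation ⟨σ⟩ = exp(−N_σ³ N_τ a⁴ ∆f), and the usage numbers are either placeholders or the chiral-limit values quoted in the text.

    import numpy as np

    def delta_f(avg_sign, n_s, n_t):
        # Free-energy density difference (in lattice units) between the full and
        # sign-quenched ensembles, from <sign> = exp(-N_s^3 * N_t * a^4 * delta_f).
        return -np.log(avg_sign) / (n_s**3 * n_t)

    def nuclear_interaction(mu_c, m_B):
        # Binding measure used in the text: (a*mu_B^1st - a*m_B) / (a*m_B);
        # negative values indicate attraction.
        return (mu_c - m_B) / m_B

    # Toy usage: an average sign of 0.5 (placeholder) on a 6^3 x 8 lattice
    print(delta_f(0.5, n_s=6, n_t=8))        # ~4.0e-4, same order as the bound quoted above
    print(nuclear_interaction(1.78, 3.12))   # chiral-limit values quoted in the text, ~ -43%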
We clearly find that the nuclear interaction decreases drastically and almost linearly until it nearly vanishes at about am_q = 2.0, corresponding to a pion mass am_π = 3.36, see Section II B. The large error bars at larger quark masses, which are due to the subtraction of two nearly equal quantities, make it difficult to extract a non-zero nuclear interaction at the largest quark masses. In this work, we have determined the baryon mass and the nuclear transition via Monte Carlo simulations: the worm algorithm based on the dual formulation, supplemented by additional updates at finite β. All these numerical results and various analytic expressions are summarized in Fig. . We find that as the quark mass becomes large, spatial meson hoppings (i.e. spatial dimers) become rare, which makes this 3+1-dimensional system closer to 1-dim. QCD . Also, both the baryon mass and the baryon chemical potential obtained in our dual representation, i.e. for staggered fermions, approach the baryon mass of the 3-dim. effective theory, which is based on Wilson fermions. Another comparison, summarizing the validity of the mean field approach discussed in Section II B, is shown in Fig. . It is evident that mean field theory shows strong deviations for small quark masses, but this discrepancy becomes smaller for larger quark masses. The extension of the study of the nuclear transition to finite inverse gauge coupling β is summarized in Fig. , which shows the β-dependence of aµ_B^c for various quark masses. For all quark masses ranging from am_q = 0 to am_q = 1.0, there is only a very weak β-dependence, confirming the expectation from mean field theory . This work was restricted to isotropic lattices, ξ = a/a_t = 1, i.e. we performed simulations at fixed temperature. Non-isotropic lattices are necessary to vary the temperature at fixed values of β. This requires including two bare anisotropies, γ for the fermionic action and γ_G for the gauge action. Finite β has so far only been studied by us in the chiral limit . Clearly, it is interesting to study the location of the nuclear critical point also including higher-order gauge corrections and at finite quark mass. Simulations including O(β²) are under preparation. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1. 
Question: What is the main focus of the research paper? Only give me the answer and do not output any other words.
Answer:
[ "Nuclear liquid-gas transition in lattice QCD." ]
276
5,259
single_doc_qa
4e9edee2f89344c09909ba0b5108728e
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Event detection on microblogging platforms such as Twitter aims to detect events preemptively. A main task in event detection is detecting events of predetermined types BIBREF0, such as concerts or controversial events based on microposts matching specific event descriptions. This task has extensive applications ranging from cyber security BIBREF1, BIBREF2 to political elections BIBREF3 or public health BIBREF4, BIBREF5. Due to the high ambiguity and inconsistency of the terms used in microposts, event detection is generally performed though statistical machine learning models, which require a labeled dataset for model training. Data labeling is, however, a long, laborious, and usually costly process. For the case of micropost classification, though positive labels can be collected (e.g., using specific hashtags, or event-related date-time information), there is no straightforward way to generate negative labels useful for model training. To tackle this lack of negative labels and the significant manual efforts in data labeling, BIBREF1 (BIBREF1, BIBREF3) introduced a weak supervision based learning approach, which uses only positively labeled data, accompanied by unlabeled examples by filtering microposts that contain a certain keyword indicative of the event type under consideration (e.g., `hack' for cyber security). Another key technique in this context is expectation regularization BIBREF6, BIBREF7, BIBREF1. Here, the estimated proportion of relevant microposts in an unlabeled dataset containing a keyword is given as a keyword-specific expectation. This expectation is used in the regularization term of the model's objective function to constrain the posterior distribution of the model predictions. By doing so, the model is trained with an expectation on its prediction for microposts that contain the keyword. Such a method, however, suffers from two key problems: Due to the unpredictability of event occurrences and the constantly changing dynamics of users' posting frequency BIBREF8, estimating the expectation associated with a keyword is a challenging task, even for domain experts; The performance of the event detection model is constrained by the informativeness of the keyword used for model training. As of now, we lack a principled method for discovering new keywords and improve the model performance. To address the above issues, we advocate a human-AI loop approach for discovering informative keywords and estimating their expectations reliably. Our approach iteratively leverages 1) crowd workers for estimating keyword-specific expectations, and 2) the disagreement between the model and the crowd for discovering new informative keywords. More specifically, at each iteration after we obtain a keyword-specific expectation from the crowd, we train the model using expectation regularization and select those keyword-related microposts for which the model's prediction disagrees the most with the crowd's expectation; such microposts are then presented to the crowd to identify new keywords that best explain the disagreement. By doing so, our approach identifies new keywords which convey more relevant information with respect to existing ones, thus effectively boosting model performance. 
By exploiting the disagreement between the model and the crowd, our approach can make efficient use of the crowd, which is of critical importance in a human-in-the-loop context BIBREF9, BIBREF10. An additional advantage of our approach is that by obtaining new keywords that improve model performance over time, we are able to gain insight into how the model learns for specific event detection tasks. Such an advantage is particularly useful for event detection using complex models, e.g., deep neural networks, which are intrinsically hard to understand BIBREF11, BIBREF12. An additional challenge in involving crowd workers is that their contributions are not fully reliable BIBREF13. In the crowdsourcing literature, this problem is usually tackled with probabilistic latent variable models BIBREF14, BIBREF15, BIBREF16, which are used to perform truth inference by aggregating a redundant set of crowd contributions. Our human-AI loop approach improves the inference of keyword expectation by aggregating contributions not only from the crowd but also from the model. This, however, comes with its own challenge as the model's predictions are further dependent on the results of expectation inference, which is used for model training. To address this problem, we introduce a unified probabilistic model that seamlessly integrates expectation inference and model training, thereby allowing the former to benefit from the latter while resolving the inter-dependency between the two. To the best of our knowledge, we are the first to propose a human-AI loop approach that iteratively improves machine learning models for event detection. In summary, our work makes the following key contributions: A novel human-AI loop approach for micropost event detection that jointly discovers informative keywords and estimates their expectation; A unified probabilistic model that infers keyword expectation and simultaneously performs model training; An extensive empirical evaluation of our approach on multiple real-world datasets demonstrating that our approach significantly improves the state of the art by an average of 24.3% AUC. The rest of this paper is organized as follows. First, we present our human-AI loop approach in Section SECREF2. Subsequently, we introduce our proposed probabilistic model in Section SECREF3. The experimental setup and results are presented in Section SECREF4. Finally, we briefly cover related work in Section SECREF5 before concluding our work in Section SECREF6. The Human-AI Loop Approach Given a set of labeled and unlabeled microposts, our goal is to extract informative keywords and estimate their expectations in order to train a machine learning model. To achieve this goal, our proposed human-AI loop approach comprises two crowdsourcing tasks, i.e., micropost classification followed by keyword discovery, and a unified probabilistic model for expectation inference and model training. Figure FIGREF6 presents an overview of our approach. Next, we describe our approach from a process-centric perspective. Following previous studies BIBREF1, BIBREF17, BIBREF2, we collect a set of unlabeled microposts $\mathcal {U}$ from a microblogging platform and post-filter, using an initial (set of) keyword(s), those microposts that are potentially relevant to an event category. Then, we collect a set of event-related microposts (i.e., positively labeled microposts) $\mathcal {L}$, post-filtering with a list of seed events. 
$\mathcal {U}$ and $\mathcal {L}$ are used together to train a discriminative model (e.g., a deep neural network) for classifying the relevance of microposts to an event. We denote the target model as $p_\theta (y|x)$, where $\theta $ is the model parameter to be learned and $y$ is the label of an arbitrary micropost, represented by a bag-of-words vector $x$. Our approach iterates several times $t=\lbrace 1, 2, \ldots \rbrace $ until the performance of the target model converges. Each iteration starts from the initial keyword(s) or the new keyword(s) discovered in the previous iteration. Given such a keyword, denoted by $w^{(t)}$, the iteration starts by sampling microposts containing the keyword from $\mathcal {U}$, followed by dynamically creating micropost classification tasks and publishing them on a crowdsourcing platform. Micropost Classification. The micropost classification task requires crowd workers to label the selected microposts into two classes: event-related and non event-related. In particular, workers are given instructions and examples to differentiate event-instance related microposts and general event-category related microposts. Consider, for example, the following microposts in the context of Cyber attack events, both containing the keyword `hack': Credit firm Equifax says 143m Americans' social security numbers exposed in hack This micropost describes an instance of a cyber attack event that the target model should identify. This is, therefore, an event-instance related micropost and should be considered as a positive example. Contrast this with the following example: Companies need to step their cyber security up This micropost, though related to cyber security in general, does not mention an instance of a cyber attack event, and is of no interest to us for event detection. This is an example of a general event-category related micropost and should be considered as a negative example. In this task, each selected micropost is labeled by multiple crowd workers. The annotations are passed to our probabilistic model for expectation inference and model training. Expectation Inference & Model Training. Our probabilistic model takes crowd-contributed labels and the model trained in the previous iteration as input. As output, it generates a keyword-specific expectation, denoted as $e^{(t)}$, and an improved version of the micropost classification model, denoted as $p_{\theta ^{(t)}}(y|x)$. The details of our probabilistic model are given in Section SECREF3. Keyword Discovery. The keyword discovery task aims at discovering a new keyword (or a set of keywords) that is most informative for model training with respect to existing keywords. To this end, we first apply the current model $p_{\theta ^{(t)}}(y|x)$ on the unlabeled microposts $\mathcal {U}$. For those that contain the keyword $w^{(t)}$, we calculate the disagreement between the model predictions and the keyword-specific expectation $e^{(t)}$: and select the ones with the highest disagreement for keyword discovery. These selected microposts are supposed to contain information that can explain the disagreement between the model prediction and keyword-specific expectation, and can thus provide information that is most different from the existing set of keywords for model training. For instance, our study shows that the expectation for the keyword `hack' is 0.20, which means only 20% of the initial set of microposts retrieved with the keyword are event-related. A micropost selected with the highest disagreement (Eq. 
DISPLAY_FORM7), whose likelihood of being event-related as predicted by the model is $99.9\%$, is shown as an example below: RT @xxx: Hong Kong securities brokers hit by cyber attacks, may face more: regulator #cyber #security #hacking https://t.co/rC1s9CB This micropost contains keywords that can better indicate the relevance to a cyber security event than the initial keyword `hack', e.g., `securities', `hit', and `attack'. Note that when the keyword-specific expectation $e^{(t)}$ in Equation DISPLAY_FORM7 is high, the selected microposts will be the ones that contain keywords indicating the irrelevance of the microposts to an event category. Such keywords are also useful for model training as they help improve the model's ability to identify irrelevant microposts. To identify new keywords in the selected microposts, we again leverage crowdsourcing, as humans are typically better than machines at providing specific explanations BIBREF18, BIBREF19. In the crowdsourcing task, workers are first asked to find those microposts where the model predictions are deemed correct. Then, from those microposts, workers are asked to find the keyword that best indicates the class of the microposts as predicted by the model. The keyword most frequently identified by the workers is then used as the initial keyword for the following iteration. In case multiple keywords are selected, e.g., the top-$N$ frequent ones, workers will be asked to perform $N$ micropost classification tasks for each keyword in the next iteration, and the model training will be performed on multiple keyword-specific expectations. Unified Probabilistic Model This section introduces our probabilistic model that infers keyword expectation and trains the target model simultaneously. We start by formalizing the problem and introducing our model, before describing the model learning method. Problem Formalization. We consider the problem at iteration $t$ where the corresponding keyword is $w^{(t)}$. In the current iteration, let $\mathcal {U}^{(t)} \subset \mathcal {U}$ denote the set of all microposts containing the keyword and $\mathcal {M}^{(t)}= \lbrace x_{m}\rbrace _{m=1}^M\subset \mathcal {U}^{(t)}$ be the randomly selected subset of $M$ microposts labeled by $N$ crowd workers $\mathcal {C} = \lbrace c_n\rbrace _{n=1}^N$. The annotations form a matrix $\mathbf {A}\in \mathbb {R}^{M\times N}$ where $\mathbf {A}_{mn}$ is the label for the micropost $x_m$ contributed by crowd worker $c_n$. Our goal is to infer the keyword-specific expectation $e^{(t)}$ and train the target model by learning the model parameter $\theta ^{(t)}$. An additional parameter of our probabilistic model is the reliability of crowd workers, which is essential when involving crowdsourcing. Following Dawid and Skene BIBREF14, BIBREF16, we represent the annotation reliability of worker $c_n$ by a latent confusion matrix $\pi ^{(n)}$, where the $rs$-th element $\pi _{rs}^{(n)}$ denotes the probability of $c_n$ labeling a micropost as class $r$ given the true class $s$. Unified Probabilistic Model ::: Expectation as Model Posterior First, we introduce an expectation regularization technique for the weakly supervised learning of the target model $p_{\theta ^{(t)}}(y|x)$. In this setting, the objective function of the target model is composed of two parts, corresponding to the labeled microposts $\mathcal {L}$ and the unlabeled ones $\mathcal {U}$. 
The former part aims at maximizing the likelihood of the labeled microposts: where we assume that $\theta $ is generated from a prior distribution (e.g., Laplacian or Gaussian) parameterized by $\sigma $. To leverage unlabeled data for model training, we make use of the expectations of existing keywords, i.e., {($w^{(1)}$, $e^{(1)}$), ..., ($w^{(t-1)}$, $e^{(t-1)}$), ($w^{(t)}$, $e^{(t)}$)} (Note that $e^{(t)}$ is inferred), as a regularization term to constrain model training. To do so, we first give the model's expectation for each keyword $w^{(k)}$ ($1\le k\le t$) as follows: which denotes the empirical expectation of the model’s posterior predictions on the unlabeled microposts $\mathcal {U}^{(k)}$ containing keyword $w^{(k)}$. Expectation regularization can then be formulated as the regularization of the distance between the Bernoulli distribution parameterized by the model's expectation and the expectation of the existing keyword: where $D_{KL}[\cdot \Vert \cdot ]$ denotes the KL-divergence between the Bernoulli distributions $Ber(e^{(k)})$ and $Ber(\mathbb {E}_{x\sim \mathcal {U}^{(k)}}(y))$, and $\lambda $ controls the strength of expectation regularization. Unified Probabilistic Model ::: Expectation as Class Prior To learn the keyword-specific expectation $e^{(t)}$ and the crowd worker reliability $\pi ^{(n)}$ ($1\le n\le N$), we model the likelihood of the crowd-contributed labels $\mathbf {A}$ as a function of these parameters. In this context, we view the expectation as the class prior, thus performing expectation inference as the learning of the class prior. By doing so, we connect expectation inference with model training. Specifically, we model the likelihood of an arbitrary crowd-contributed label $\mathbf {A}_{mn}$ as a mixture of multinomials where the prior is the keyword-specific expectation $e^{(t)}$: where $e_s^{(t)}$ is the probability of the ground truth label being $s$ given the keyword-specific expectation as the class prior; $K$ is the set of possible ground truth labels (binary in our context); and $r=\mathbf {A}_{mn}$ is the crowd-contributed label. Then, for an individual micropost $x_m$, the likelihood of crowd-contributed labels $\mathbf {A}_{m:}$ is given by: Therefore, the objective function for maximizing the likelihood of the entire annotation matrix $\mathbf {A}$ can be described as: Unified Probabilistic Model ::: Unified Probabilistic Model Integrating model training with expectation inference, the overall objective function of our proposed model is given by: Figure FIGREF18 depicts a graphical representation of our model, which combines the target model for training (on the left) with the generative model for crowd-contributed labels (on the right) through a keyword-specific expectation. Model Learning. Due to the unknown ground truth labels of crowd-annotated microposts ($y_m$ in Figure FIGREF18), we resort to expectation maximization for model learning. The learning algorithm iteratively takes two steps: the E-step and the M-step. The E-step infers the ground truth labels given the current model parameters. The M-step updates the model parameters, including the crowd reliability parameters $\pi ^{(n)}$ ($1\le n\le N$), the keyword-specific expectation $e^{(t)}$, and the parameter of the target model $\theta ^{(t)}$. The E-step and the crowd parameter update in the M-step are similar to the Dawid-Skene model BIBREF14. 
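As a concrete illustration of the expectation-regularization term described above, the following NumPy sketch computes, for each keyword, the KL divergence between a Bernoulli at the (crowd-derived) expectation and a Bernoulli at the model's mean prediction on the unlabeled microposts containing that keyword. This is a minimal rendering of the description in the text, not the authors' implementation; function names and the toy numbers are placeholders.

    import numpy as np

    def bernoulli_kl(p, q, eps=1e-12):
        # KL( Ber(p) || Ber(q) )
        p = float(np.clip(p, eps, 1 - eps))
        q = float(np.clip(q, eps, 1 - eps))
        return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

    def expectation_regularizer(expectations, predictions, lam):
        # Sum over keywords k of  lam * KL( Ber(e_k) || Ber(mean_{x in U_k} p_theta(y=1|x)) ).
        # expectations: dict keyword -> e_k in [0, 1]
        # predictions:  dict keyword -> array of model posteriors on microposts containing k
        reg = 0.0
        for kw, e_k in expectations.items():
            model_exp = float(np.mean(predictions[kw]))
            reg += lam * bernoulli_kl(e_k, model_exp)
        return reg

    # Toy usage: the model over-predicts relevance for 'hack' (crowd expectation 0.20)
    preds = {"hack": np.array([0.9, 0.7, 0.4, 0.8, 0.6])}
    print(expectation_regularizer({"hack": 0.20}, preds, lam=10.0))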
The keyword expectation is inferred by taking into account both the crowd-contributed labels and the model prediction: The parameter of the target model is updated by gradient descent. For example, when the target model to be trained is a deep neural network, we use back-propagation with gradient descent to update the weight matrices. Experiments and Results This section presents our experimental setup and results for evaluating our approach. We aim at answering the following questions: [noitemsep,leftmargin=*] Q1: How effectively does our proposed human-AI loop approach enhance the state-of-the-art machine learning models for event detection? Q2: How well does our keyword discovery method work compare to existing keyword expansion methods? Q3: How effective is our approach using crowdsourcing at obtaining new keywords compared with an approach labelling microposts for model training under the same cost? Q4: How much benefit does our unified probabilistic model bring compared to methods that do not take crowd reliability into account? Experiments and Results ::: Experimental Setup Datasets. We perform our experiments with two predetermined event categories: cyber security (CyberAttack) and death of politicians (PoliticianDeath). These event categories are chosen as they are representative of important event types that are of interest to many governments and companies. The need to create our own dataset was motivated by the lack of public datasets for event detection on microposts. The few available datasets do not suit our requirements. For example, the publicly available Events-2012 Twitter dataset BIBREF20 contains generic event descriptions such as Politics, Sports, Culture etc. Our work targets more specific event categories BIBREF21. Following previous studies BIBREF1, we collect event-related microposts from Twitter using 11 and 8 seed events (see Section SECREF2) for CyberAttack and PoliticianDeath, respectively. Unlabeled microposts are collected by using the keyword `hack' for CyberAttack, while for PoliticianDeath, we use a set of keywords related to `politician' and `death' (such as `bureaucrat', `dead' etc.) For each dataset, we randomly select 500 tweets from the unlabeled subset and manually label them for evaluation. Table TABREF25 shows key statistics from our two datasets. Comparison Methods. To demonstrate the generality of our approach on different event detection models, we consider Logistic Regression (LR) BIBREF1 and Multilayer Perceptron (MLP) BIBREF2 as the target models. As the goal of our experiments is to demonstrate the effectiveness of our approach as a new model training technique, we use these widely used models. Also, we note that in our case other neural network models with more complex network architectures for event detection, such as the bi-directional LSTM BIBREF17, turn out to be less effective than a simple feedforward network. For both LR and MLP, we evaluate our proposed human-AI loop approach for keyword discovery and expectation estimation by comparing against the weakly supervised learning method proposed by BIBREF1 (BIBREF1) and BIBREF17 (BIBREF17) where only one initial keyword is used with an expectation estimated by an individual expert. Parameter Settings. We empirically set optimal parameters based on a held-out validation set that contains 20% of the test data. These include the hyperparamters of the target model, those of our proposed probabilistic model, and the parameters used for training the target model. 
We explore MLP with 1, 2 and 3 hidden layers and apply a grid search in 32, 64, 128, 256, 512 for the dimension of the embeddings and that of the hidden layers. For the coefficient of expectation regularization, we follow BIBREF6 (BIBREF6) and set it to $\lambda =10 \times $ #labeled examples. For model training, we use the Adam BIBREF22 optimization algorithm for both models. Evaluation. Following BIBREF1 (BIBREF1) and BIBREF3 (BIBREF3), we use accuracy and area under the precision-recall curve (AUC) metrics to measure the performance of our proposed approach. We note that due to the imbalance in our datasets (20% positive microposts in CyberAttack and 27% in PoliticianDeath), accuracy is dominated by negative examples; AUC, in comparison, better characterizes the discriminative power of the model. Crowdsourcing. We chose Level 3 workers on the Figure-Eight crowdsourcing platform for our experiments. The inter-annotator agreement in micropost classification is taken into account through the EM algorithm. For keyword discovery, we filter keywords based on the frequency of the keyword being selected by the crowd. In terms of cost-effectiveness, our approach is motivated from the fact that crowdsourced data annotation can be expensive, and is thus designed with minimal crowd involvement. For each iteration, we selected 50 tweets for keyword discovery and 50 tweets for micropost classification per keyword. For a dataset with 80k tweets (e.g., CyberAttack), our approach only requires to manually inspect 800 tweets (for 8 keywords), which is only 1% of the entire dataset. Experiments and Results ::: Results of our Human-AI Loop (Q1) Table TABREF26 reports the evaluation of our approach on both the CyberAttack and PoliticianDeath event categories. Our approach is configured such that each iteration starts with 1 new keyword discovered in the previous iteration. Our approach improves LR by 5.17% (Accuracy) and 18.38% (AUC), and MLP by 10.71% (Accuracy) and 30.27% (AUC) on average. Such significant improvements clearly demonstrate that our approach is effective at improving model performance. We observe that the target models generally converge between the 7th and 9th iteration on both datasets when performance is measured by AUC. The performance can slightly degrade when the models are further trained for more iterations on both datasets. This is likely due to the fact that over time, the newly discovered keywords entail lower novel information for model training. For instance, for the CyberAttack dataset the new keyword in the 9th iteration `election' frequently co-occurs with the keyword `russia' in the 5th iteration (in microposts that connect Russian hackers with US elections), thus bringing limited new information for improving the model performance. As a side remark, we note that the models converge faster when performance is measured by accuracy. Such a comparison result confirms the difference between the metrics and shows the necessity for more keywords to discriminate event-related microposts from non event-related ones. Experiments and Results ::: Comparative Results on Keyword Discovery (Q2) Figure FIGREF31 shows the evaluation of our approach when discovering new informative keywords for model training (see Section SECREF2: Keyword Discovery). We compare our human-AI collaborative way of discovering new keywords against a query expansion (QE) approach BIBREF23, BIBREF24 that leverages word embeddings to find similar words in the latent semantic space. 
Specifically, we use pre-trained word embeddings based on a large Google News dataset for query expansion. For instance, the top keywords resulting from QE for `politician' are, `deputy',`ministry',`secretary', and `minister'. For each of these keywords, we use the crowd to label a set of tweets and obtain a corresponding expectation. We observe that our approach consistently outperforms QE by an average of $4.62\%$ and $52.58\%$ AUC on CyberAttack and PoliticianDeath, respectively. The large gap between the performance improvements for the two datasets is mainly due to the fact that microposts that are relevant for PoliticianDeath are semantically more complex than those for CyberAttack, as they encode noun-verb relationship (e.g., “the king of ... died ...”) rather than a simple verb (e.g., “... hacked.”) for the CyberAttack microposts. QE only finds synonyms of existing keywords related to either `politician' or `death', however cannot find a meaningful keyword that fully characterizes the death of a politician. For instance, QE finds the keywords `kill' and `murder', which are semantically close to `death' but are not specifically relevant to the death of a politician. Unlike QE, our approach identifies keywords that go beyond mere synonyms and that are more directly related to the end task, i.e., discriminating event-related microposts from non related ones. Examples are `demise' and `condolence'. As a remark, we note that in Figure FIGREF31(b), the increase in QE performance on PoliticianDeath is due to the keywords `deputy' and `minister', which happen to be highly indicative of the death of a politician in our dataset; these keywords are also identified by our approach. Experiments and Results ::: Cost-Effectiveness Results (Q3) To demonstrate the cost-effectiveness of using crowdsourcing for obtaining new keywords and consequently, their expectations, we compare the performance of our approach with an approach using crowdsourcing to only label microposts for model training at the same cost. Specifically, we conducted an additional crowdsourcing experiment where the same cost used for keyword discovery in our approach is used to label additional microposts for model training. These newly labeled microposts are used with the microposts labeled in the micropost classification task of our approach (see Section SECREF2: Micropost Classification) and the expectation of the initial keyword to train the model for comparison. The model trained in this way increases AUC by 0.87% for CyberAttack, and by 1.06% for PoliticianDeath; in comparison, our proposed approach increases AUC by 33.42% for PoliticianDeath and by 15.23% for CyberAttack over the baseline presented by BIBREF1). These results show that using crowdsourcing for keyword discovery is significantly more cost-effective than simply using crowdsourcing to get additional labels when training the model. Experiments and Results ::: Expectation Inference Results (Q4) To investigate the effectiveness of our expectation inference method, we compare it against a majority voting approach, a strong baseline in truth inference BIBREF16. Figure FIGREF36 shows the result of this evaluation. We observe that our approach results in better models for both CyberAttack and PoliticianDeath. Our manual investigation reveals that workers' annotations are of high reliability, which explains the relatively good performance of majority voting. 
Despite limited margin for improvement, our method of expectation inference improves the performance of majority voting by $0.4\%$ and $1.19\%$ AUC on CyberAttack and PoliticianDeath, respectively. Related Work Event Detection. The techniques for event extraction from microblogging platforms can be classified according to their domain specificity and their detection method BIBREF0. Early works mainly focus on open domain event detection BIBREF25, BIBREF26, BIBREF27. Our work falls into the category of domain-specific event detection BIBREF21, which has drawn increasing attention due to its relevance for various applications such as cyber security BIBREF1, BIBREF2 and public health BIBREF4, BIBREF5. In terms of technique, our proposed detection method is related to the recently proposed weakly supervised learning methods BIBREF1, BIBREF17, BIBREF3. This comes in contrast with fully-supervised learning methods, which are often limited by the size of the training data (e.g., a few hundred examples) BIBREF28, BIBREF29. Human-in-the-Loop Approaches. Our work extends weakly supervised learning methods by involving humans in the loop BIBREF13. Existing human-in-the-loop approaches mainly leverage crowds to label individual data instances BIBREF9, BIBREF10 or to debug the training data BIBREF30, BIBREF31 or components BIBREF32, BIBREF33, BIBREF34 of a machine learning system. Unlike these works, we leverage crowd workers to label sampled microposts in order to obtain keyword-specific expectations, which can then be generalized to help classify microposts containing the same keyword, thus amplifying the utility of the crowd. Our work is further connected to the topic of interpretability and transparency of machine learning models BIBREF11, BIBREF35, BIBREF12, for which humans are increasingly involved, for instance for post-hoc evaluations of the model's interpretability. In contrast, our approach directly solicits informative keywords from the crowd for model training, thereby providing human-understandable explanations for the improved model. Conclusion In this paper, we presented a new human-AI loop approach for keyword discovery and expectation estimation to better train event detection models. Our approach takes advantage of the disagreement between the crowd and the model to discover informative keywords and leverages the joint power of the crowd and the model in expectation inference. We evaluated our approach on real-world datasets and showed that it significantly outperforms the state of the art and that it is particularly useful for detecting events where relevant microposts are semantically complex, e.g., the death of a politician. As future work, we plan to parallelize the crowdsourcing tasks and optimize our pipeline in order to use our event detection approach in real-time. Acknowledgements This project has received funding from the Swiss National Science Foundation (grant #407540_167320 Tighten-it-All) and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 683253/GraphInt). Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What type of classifiers are used? Only give me the answer and do not output any other words.
Answer:
[ "probabilistic model", "Logistic Regression, Multilayer Perceptron" ]
276
6,548
single_doc_qa
1fb9e31a331e4654b1c5a2e83a531653
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction There exists a class of language construction known as pun in natural language texts and utterances, where a certain word or other lexical items are used to exploit two or more separate meanings. It has been shown that understanding of puns is an important research question with various real-world applications, such as human-computer interaction BIBREF0 , BIBREF1 and machine translation BIBREF2 . Recently, many researchers show their interests in studying puns, like detecting pun sentences BIBREF3 , locating puns in the text BIBREF4 , interpreting pun sentences BIBREF5 and generating sentences containing puns BIBREF6 , BIBREF7 , BIBREF8 . A pun is a wordplay in which a certain word suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect. Puns can be generally categorized into two groups, namely heterographic puns (where the pun and its latent target are phonologically similar) and homographic puns (where the two meanings of the pun reflect its two distinct senses) BIBREF9 . Consider the following two examples: The first punning joke exploits the sound similarity between the word “propane" and the latent target “profane", which can be categorized into the group of heterographic puns. Another categorization of English puns is homographic pun, exemplified by the second instance leveraging distinct senses of the word “gut". Pun detection is the task of detecting whether there is a pun residing in the given text. The goal of pun location is to find the exact word appearing in the text that implies more than one meanings. Most previous work addresses such two tasks separately and develop separate systems BIBREF10 , BIBREF5 . Typically, a system for pun detection is built to make a binary prediction on whether a sentence contains a pun or not, where all instances (with or without puns) are taken into account during training. For the task of pun location, a separate system is used to make a single prediction as to which word in the given sentence in the text that trigger more than one semantic interpretations of the text, where the training data involves only sentences that contain a pun. Therefore, if one is interested in solving both problems at the same time, a pipeline approach that performs pun detection followed by pun location can be used. Compared to the pipeline methods, joint learning has been shown effective BIBREF11 , BIBREF12 since it is able to reduce error propagation and allows information exchange between tasks which is potentially beneficial to all the tasks. In this work, we demonstrate that the detection and location of puns can be jointly addressed by a single model. The pun detection and location tasks can be combined as a sequence labeling problem, which allows us to jointly detect and locate a pun in a sentence by assigning each word a tag. Since each context contains a maximum of one pun BIBREF9 , we design a novel tagging scheme to capture this structural constraint. Statistics on the corpora also show that a pun tends to appear in the second half of a context. To capture such a structural property, we also incorporate word position knowledge into our structured prediction model. 
Experiments on the benchmark datasets show that detection and location tasks can reinforce each other, leading to new state-of-the-art performance on these two tasks. To the best of our knowledge, this is the first work that performs joint detection and location of English puns by using a sequence labeling approach. Problem Definition We first design a simple tagging scheme consisting of two tags { INLINEFORM0 }: INLINEFORM0 tag means the current word is not a pun. INLINEFORM0 tag means the current word is a pun. If the tag sequence of a sentence contains a INLINEFORM0 tag, then the text contains a pun and the word corresponding to INLINEFORM1 is the pun. The contexts have the characteristic that each context contains a maximum of one pun BIBREF9 . In other words, there exists only one pun if the given sentence is detected as the one containing a pun. Otherwise, there is no pun residing in the text. To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }. INLINEFORM0 tag indicates that the current word appears before the pun in the given context. INLINEFORM0 tag highlights the current word is a pun. INLINEFORM0 tag indicates that the current word appears after the pun. We empirically show that the INLINEFORM0 scheme can guarantee the context property that there exists a maximum of one pun residing in the text. Given a context from the training set, we will be able to generate its corresponding gold tag sequence using a deterministic procedure. Under the two schemes, if a sentence does not contain any puns, all words will be tagged with INLINEFORM0 or INLINEFORM1 , respectively. Exemplified by the second sentence “Some diets cause a gut reaction," the pun is given as “gut." Thus, under the INLINEFORM2 scheme, it should be tagged with INLINEFORM3 , while the words before it are assigned with the tag INLINEFORM4 and words after it are with INLINEFORM5 , as illustrated in Figure FIGREF8 . Likewise, the INLINEFORM6 scheme tags the word “gut" with INLINEFORM7 , while other words are tagged with INLINEFORM8 . Therefore, we can combine the pun detection and location tasks into one problem which can be solved by the sequence labeling approach. Model Neural models have shown their effectiveness on sequence labeling tasks BIBREF13 , BIBREF14 , BIBREF15 . In this work, we adopt the bidirectional Long Short Term Memory (BiLSTM) BIBREF16 networks on top of the Conditional Random Fields BIBREF17 (CRF) architecture to make labeling decisions, which is one of the classical models for sequence labeling. Our model architecture is illustrated in Figure FIGREF8 with a running example. Given a context/sentence INLINEFORM0 where INLINEFORM1 is the length of the context, we generate the corresponding tag sequence INLINEFORM2 based on our designed tagging schemes and the original annotations for pun detection and location provided by the corpora. Our model is then trained on pairs of INLINEFORM3 . Input. The contexts in the pun corpus hold the property that each pun contains exactly one content word, which can be either a noun, a verb, an adjective, or an adverb. To capture this characteristic, we consider lexical features at the character level. Similar to the work of BIBREF15 , the character embeddings are trained by the character-level LSTM networks on the unannotated input sequences. 
Nonlinear transformations are then applied to the character embeddings by highway networks BIBREF18 , which map the character-level features into different semantic spaces. We also observe that a pun tends to appear at the end of a sentence. Specifically, based on the statistics, we found that sentences with a pun that locate at the second half of the text account for around 88% and 92% in homographic and heterographic datasets, respectively. We thus introduce a binary feature that indicates if a word is located at the first or the second half of an input sentence to capture such positional information. A binary indicator can be mapped to a vector representation using a randomly initialized embedding table BIBREF19 , BIBREF20 . In this work, we directly adopt the value of the binary indicator as part of the input. The concatenation of the transformed character embeddings, the pre-trained word embeddings BIBREF21 , and the position indicators are taken as input of our model. Tagging. The input is then fed into a BiLSTM network, which will be able to capture contextual information. For a training instance INLINEFORM0 , we suppose the output by the word-level BiLSTM is INLINEFORM1 . The CRF layer is adopted to capture label dependencies and make final tagging decisions at each position, which has been included in many state-of-the-art sequence labeling models BIBREF14 , BIBREF15 . The conditional probability is defined as: where INLINEFORM0 is a set of all possible label sequences consisting of tags from INLINEFORM1 (or INLINEFORM2 ), INLINEFORM3 and INLINEFORM4 are weight and bias parameters corresponding to the label pair INLINEFORM5 . During training, we minimize the negative log-likelihood summed over all training instances: where INLINEFORM0 refers to the INLINEFORM1 -th instance in the training set. During testing, we aim to find the optimal label sequence for a new input INLINEFORM2 : This search process can be done efficiently using the Viterbi algorithm. Datasets and Settings We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. We notice there is no standard splitting information provided for both datasets. Thus we apply 10-fold cross validation. To make direct comparisons with prior studies, following BIBREF4 , we accumulated the predictions for all ten folds and calculate the scores in the end. For each fold, we randomly select 10% of the instances from the training set for development. Word embeddings are initialized with the 100-dimensional Glove BIBREF21 . The dimension of character embeddings is 30 and they are randomly initialized, which can be fine tuned during training. The pre-trained word embeddings are not updated during training. The dimensions of hidden vectors for both char-level and word-level LSTM units are set to 300. We adopt stochastic gradient descent (SGD) BIBREF26 with a learning rate of 0.015. For the pun detection task, if the predicted tag sequence contains at least one INLINEFORM0 tag, we regard the output (i.e., the prediction of our pun detection model) for this task as true, otherwise false. For the pun location task, a predicted pun is regarded as correct if and only if it is labeled as the gold pun in the dataset. As to pun location, to make fair comparisons with prior studies, we only consider the instances that are labeled as the ones containing a pun. 
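To make the two tagging schemes described above concrete, the short sketch below (an added illustration, not the authors' code) converts a tokenized sentence and the pun position, if any, into gold tag sequences under both schemes. Tagging pun-free sentences entirely with B under the second scheme is an assumed convention, since the source text only states that a single tag is used in that case.

    def np_scheme_tags(tokens, pun_index=None):
        # {N, P} scheme: every token is N except the pun word (if any).
        return ["P" if i == pun_index else "N" for i in range(len(tokens))]

    def bpa_scheme_tags(tokens, pun_index=None):
        # {B, P, A} scheme: B before the pun, P at the pun, A after it.
        # Pun-free sentences are tagged all-B here (assumed convention).
        if pun_index is None:
            return ["B"] * len(tokens)
        return ["B" if i < pun_index else "P" if i == pun_index else "A"
                for i in range(len(tokens))]

    sentence = "Some diets cause a gut reaction .".split()
    print(np_scheme_tags(sentence, pun_index=4))    # ['N', 'N', 'N', 'N', 'P', 'N', 'N']
    print(bpa_scheme_tags(sentence, pun_index=4))   # ['B', 'B', 'B', 'B', 'P', 'A', 'A']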
We report precision, recall and INLINEFORM1 score in Table TABREF11. A list of prior works that did not employ joint learning is also shown in the first block of Table TABREF11. Results We also implemented a baseline model based on conditional random fields (CRF), where features like POS tags produced by the Stanford POS tagger BIBREF27, n-grams, label transitions, word suffixes and relative position to the end of the text are considered. We can see that our model with the INLINEFORM0 tagging scheme yields new state-of-the-art INLINEFORM1 scores on pun detection and competitive results on pun location, compared to the baselines in the first block that do not adopt joint learning. For location on heterographic puns, our model's performance is slightly lower than that of the system of BIBREF25, which is a rule-based locator. Compared to the CRF baseline, our model, with either the INLINEFORM2 or the INLINEFORM3 scheme, yields significantly higher recall on both the detection and location tasks, while the precisions are relatively close. This demonstrates the effectiveness of the BiLSTM, which learns the contextual features of the given texts – such information appears to be helpful in recalling more puns. Compared to the INLINEFORM0 scheme, the INLINEFORM1 tagging scheme yields better performance on these two tasks. After studying the outputs of the two approaches, we found that one leading source of error for the INLINEFORM2 approach is that more than one word in a single instance is assigned the INLINEFORM3 tag. However, according to the description of puns in BIBREF9, each context contains a maximum of one pun. Thus, this useful structural constraint is not well captured by the simple approach based on the INLINEFORM4 tagging scheme. On the other hand, by applying the INLINEFORM5 tagging scheme, the constraint is properly captured in the model. As a result, the results for this approach are significantly better than those of the approach based on the INLINEFORM6 tagging scheme, as we can observe from the table. Under the same experimental setup, we also attempted to exclude the word position features. Results are given by INLINEFORM7 - INLINEFORM8. As expected, the performance of pun location drops, since the position features capture the interesting property that a pun tends to appear in the second half of a sentence. While such knowledge is helpful for the location task, interestingly, a model without position knowledge yields improved performance on the pun detection task. One possible reason is that detecting whether a sentence contains a pun does not depend on such word position information. Additionally, we conduct experiments over sentences containing a pun only, namely 1,607 and 1,271 instances from the homographic and heterographic pun corpora, respectively. This setting can be regarded as a "pipeline" method in which the classifier for pun detection is assumed to be perfect. Following the prior work of BIBREF4, we apply 10-fold cross validation. Since we are given that all input sentences contain a pun, we only report accumulated results on pun location, denoted as Pipeline in Table TABREF11. Compared with our approaches, the performance of this approach drops significantly. On the other hand, this fact demonstrates that the two tasks, detection and location of puns, can reinforce each other. These figures demonstrate the effectiveness of our sequence labeling method for detecting and locating English puns in a joint manner. 
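To make the scoring rules used above concrete, the sketch below shows how detection and location decisions can be read off a predicted tag sequence and how detection precision and recall (and their harmonic mean) can be accumulated. It reuses the hypothetical PUN tag name from the earlier tagging sketch and is not the authors' evaluation script.

```python
# Sketch of the scoring rules described above, reusing the hypothetical
# "PUN" tag name from the earlier tagging sketch.
from typing import List, Optional


def detects_pun(tags: List[str]) -> bool:
    """Detection output is positive if any token is tagged as the pun."""
    return "PUN" in tags


def predicted_pun_index(tags: List[str]) -> Optional[int]:
    """Location output: index of the first token tagged as the pun, if any."""
    return tags.index("PUN") if "PUN" in tags else None


def detection_scores(pred_tag_seqs, gold_tag_seqs):
    """Accumulate detection precision, recall and their harmonic mean."""
    tp = fp = fn = 0
    for pred, gold in zip(pred_tag_seqs, gold_tag_seqs):
        p, g = detects_pun(pred), detects_pun(gold)
        if p and g:
            tp += 1
        elif p and not g:
            fp += 1
        elif g and not p:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f


# A predicted pun is counted as correct for location only if its index
# matches the gold pun index:
pred = ["BEFORE", "BEFORE", "PUN", "AFTER"]
gold = ["BEFORE", "BEFORE", "PUN", "AFTER"]
print(predicted_pun_index(pred) == predicted_pun_index(gold))  # True
```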
Error Analysis We studied the outputs from our system and make some error analysis. We found the errors can be broadly categorized into several types, and we elaborate them here. 1) Low word coverage: since the corpora are relatively small, there exist many unseen words in the test set. Learning the representations of such unseen words is challenging, which affects the model's performance. Such errors contribute around 40% of the total errors made by our system. 2) Detection errors: we found many errors are due to the model's inability to make correct pun detection. Such inability harms both pun detection and pun location. Although our approach based on the INLINEFORM0 tagging scheme yields relatively higher scores on the detection task, we still found that 40% of the incorrectly predicted instances fall into this group. 3) Short sentences: we found it was challenging for our model to make correct predictions when the given text is short. Consider the example “Superglue! Tom rejoined," here the word rejoined is the corresponding pun. However, it would be challenging to figure out the pun with such limited contextual information. Related Work Most existing systems address pun detection and location separately. BIBREF22 applied word sense knowledge to conduct pun detection. BIBREF24 trained a bidirectional RNN classifier for detecting homographic puns. Next, a knowledge-based approach is adopted to find the exact pun. Such a system is not applicable to heterographic puns. BIBREF28 applied Google n-gram and word2vec to make decisions. The phonetic distance via the CMU Pronouncing Dictionary is computed to detect heterographic puns. BIBREF10 used the hidden Markov model and a cyclic dependency network with rich features to detect and locate puns. BIBREF23 used a supervised approach to pun detection and a weakly supervised approach to pun location based on the position within the context and part of speech features. BIBREF25 proposed a rule-based system for pun location that scores candidate words according to eleven simple heuristics. Two systems are developed to conduct detection and location separately in the system known as UWAV BIBREF3 . The pun detector combines predictions from three classifiers. The pun locator considers word2vec similarity between every pair of words in the context and position to pinpoint the pun. The state-of-the-art system for homographic pun location is a neural method BIBREF4 , where the word senses are incorporated into a bidirectional LSTM model. This method only supports the pun location task on homographic puns. Another line of research efforts related to this work is sequence labeling, such as POS tagging, chunking, word segmentation and NER. The neural methods have shown their effectiveness in this task, such as BiLSTM-CNN BIBREF13 , GRNN BIBREF29 , LSTM-CRF BIBREF30 , LSTM-CNN-CRF BIBREF14 , LM-LSTM-CRF BIBREF15 . In this work, we combine pun detection and location tasks as a single sequence labeling problem. Inspired by the work of BIBREF15 , we also adopt a LSTM-CRF with character embeddings to make labeling decisions. Conclusion In this paper, we propose to perform pun detection and location tasks in a joint manner from a sequence labeling perspective. We observe that each text in our corpora contains a maximum of one pun. Hence, we design a novel tagging scheme to incorporate such a constraint. Such a scheme guarantees that there is a maximum of one word that will be tagged as a pun during the testing phase. 
We also found that an interesting structural property, namely that most puns tend to appear in the second half of a sentence, can be helpful for this task; it was not explored in previous works. Furthermore, unlike many previous approaches, our approach, though simple, is generally applicable to both heterographic and homographic puns. Empirical results on the benchmark datasets demonstrate the effectiveness of the proposed approach: the two tasks of pun detection and location can be addressed by a single model from a sequence labeling perspective. Future research includes investigating how to make use of richer semantic and linguistic information for the detection and location of puns. Research on puns in other languages such as Chinese is still under-explored and could also be an interesting direction for our future studies. Acknowledgments We would like to thank the three anonymous reviewers for their thoughtful and constructive comments. This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156, and is partially supported by SUTD project PIE-SGP-AI-2018-01. 
Question: What is the tagging scheme employed? Only give me the answer and do not output any other words.
Answer:
[ "A new tagging scheme that tags the words before and after the pun as well as the pun words.", "a new tagging scheme consisting of three tags, namely { INLINEFORM0 }" ]
276
3,748
single_doc_qa
4bab02cd69bb4b30a637d47118b4cf21
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Sarcasm is defined as “a sharp, bitter, or cutting expression or remark; a bitter gibe or taunt”. As the fields of affective computing and sentiment analysis have gained increasing popularity BIBREF0 , it is a major concern to detect sarcastic, ironic, and metaphoric expressions. Sarcasm, especially, is key for sentiment analysis as it can completely flip the polarity of opinions. Understanding the ground truth, or the facts about a given event, allows for the detection of contradiction between the objective polarity of the event (usually negative) and its sarcastic characteristic by the author (usually positive), as in “I love the pain of breakup”. Obtaining such knowledge is, however, very difficult. In our experiments, we exposed the classifier to such knowledge extracted indirectly from Twitter. Namely, we used Twitter data crawled in a time period, which likely contain both the sarcastic and non-sarcastic accounts of an event or similar events. We believe that unambiguous non-sarcastic sentences provided the classifier with the ground-truth polarity of those events, which the classifier could then contrast with the opposite estimations in sarcastic sentences. Twitter is a more suitable resource for this purpose than blog posts, because the polarity of short tweets is easier to detect (as all the information necessary to detect polarity is likely to be contained in the same sentence) and because the Twitter API makes it easy to collect a large corpus of tweets containing both sarcastic and non-sarcastic examples of the same event. Sometimes, however, just knowing the ground truth or simple facts on the topic is not enough, as the text may refer to other events in order to express sarcasm. For example, the sentence “If Hillary wins, she will surely be pleased to recall Monica each time she enters the Oval Office :P :D”, which refers to the 2016 US presidential election campaign and to the events of early 1990's related to the US president Clinton, is sarcastic because Hillary, a candidate and Clinton's wife, would in fact not be pleased to recall her husband's alleged past affair with Monica Lewinsky. The system, however, would need a considerable amount of facts, commonsense knowledge, anaphora resolution, and logical reasoning to draw such a conclusion. In this paper, we will not deal with such complex cases. Existing works on sarcasm detection have mainly focused on unigrams and the use of emoticons BIBREF1 , BIBREF2 , BIBREF3 , unsupervised pattern mining approach BIBREF4 , semi-supervised approach BIBREF5 and n-grams based approach BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 with sentiment features. Instead, we propose a framework that learns sarcasm features automatically from a sarcasm corpus using a convolutional neural network (CNN). We also investigate whether features extracted using the pre-trained sentiment, emotion and personality models can improve sarcasm detection performance. Our approach uses relatively lower dimensional feature vectors and outperforms the state of the art on different datasets. 
In summary, the main contributions of this paper are the following: The rest of the paper is organized as follows: Section SECREF2 proposes a brief literature review on sarcasm detection; Section SECREF4 presents the proposed approach; experimental results and thorough discussion on the experiments are given in Section SECREF5 ; finally, Section SECREF6 concludes the paper. Related Works NLP research is gradually evolving from lexical to compositional semantics BIBREF10 through the adoption of novel meaning-preserving and context-aware paradigms such as convolutional networks BIBREF11 , recurrent belief networks BIBREF12 , statistical learning theory BIBREF13 , convolutional multiple kernel learning BIBREF14 , and commonsense reasoning BIBREF15 . But while other NLP tasks have been extensively investigated, sarcasm detection is a relatively new research topic which has gained increasing interest only recently, partly thanks to the rise of social media analytics and sentiment analysis. Sentiment analysis BIBREF16 and using multimodal information as a new trend BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 , BIBREF14 is a popular branch of NLP research that aims to understand sentiment of documents automatically using combination of various machine learning approaches BIBREF21 , BIBREF22 , BIBREF20 , BIBREF23 . An early work in this field was done by BIBREF6 on a dataset of 6,600 manually annotated Amazon reviews using a kNN-classifier over punctuation-based and pattern-based features, i.e., ordered sequence of high frequency words. BIBREF1 used support vector machine (SVM) and logistic regression over a feature set of unigrams, dictionary-based lexical features and pragmatic features (e.g., emoticons) and compared the performance of the classifier with that of humans. BIBREF24 described a set of textual features for recognizing irony at a linguistic level, especially in short texts created via Twitter, and constructed a new model that was assessed along two dimensions: representativeness and relevance. BIBREF5 used the presence of a positive sentiment in close proximity of a negative situation phrase as a feature for sarcasm detection. BIBREF25 used the Balanced Window algorithm for classifying Dutch tweets as sarcastic vs. non-sarcastic; n-grams (uni, bi and tri) and intensifiers were used as features for classification. BIBREF26 compared the performance of different classifiers on the Amazon review dataset using the imbalance between the sentiment expressed by the review and the user-given star rating. Features based on frequency (gap between rare and common words), written spoken gap (in terms of difference between usage), synonyms (based on the difference in frequency of synonyms) and ambiguity (number of words with many synonyms) were used by BIBREF3 for sarcasm detection in tweets. BIBREF9 proposed the use of implicit incongruity and explicit incongruity based features along with lexical and pragmatic features, such as emoticons and punctuation marks. Their method is very much similar to the method proposed by BIBREF5 except BIBREF9 used explicit incongruity features. Their method outperforms the approach by BIBREF5 on two datasets. BIBREF8 compared the performance with different language-independent features and pre-processing techniques for classifying text as sarcastic and non-sarcastic. The comparison was done over three Twitter dataset in two different languages, two of these in English with a balanced and an imbalanced distribution and the third one in Czech. 
The feature set included n-grams, word-shape patterns, pointedness and punctuation-based features. In this work, we use features extracted from a deep CNN for sarcasm detection. Some of the key differences between the proposed approach and existing methods include the use of a relatively smaller feature set, automatic feature extraction, the use of deep networks, and the adoption of pre-trained NLP models. Sentiment Analysis and Sarcasm Detection Sarcasm detection is an important subtask of sentiment analysis BIBREF27 . Since sarcastic sentences are subjective, they carry sentiment and emotion-bearing information. Most of the studies in the literature BIBREF28 , BIBREF29 , BIBREF9 , BIBREF30 include sentiment features in sarcasm detection with the use of a state-of-the-art sentiment lexicon. Below, we explain how sentiment information is key to express sarcastic opinions and the approach we undertake to exploit such information for sarcasm detection. In general, most sarcastic sentences contradict the fact. In the sentence “I love the pain present in the breakups" (Figure FIGREF4 ), for example, the word “love" contradicts “pain present in the breakups”, because in general no-one loves to be in pain. In this case, the fact (i.e., “pain in the breakups") and the contradictory statement to that fact (i.e., “I love") express sentiment explicitly. Sentiment shifts from positive to negative but, according to sentic patterns BIBREF31 , the literal sentiment remains positive. Sentic patterns, in fact, aim to detect the polarity expressed by the speaker; thus, whenever the construction “I love” is encountered, the sentence is positive no matter what comes after it (e.g., “I love the movie that you hate”). In this case, however, the sentence carries sarcasm and, hence, reflects the negative sentiment of the speaker. In another example (Figure FIGREF4 ), the fact, i.e., “I left the theater during the interval", has implicit negative sentiment. The statement “I love the movie" contradicts the fact “I left the theater during the interval"; thus, the sentence is sarcastic. Also in this case the sentiment shifts from positive to negative and hints at the sarcastic nature of the opinion. The above discussion has made clear that sentiment (and, in particular, sentiment shifts) can largely help to detect sarcasm. In order to include sentiment shifting into the proposed framework, we train a sentiment model for sentiment-specific feature extraction. Training with a CNN helps to combine the local features in the lower layers into global features in the higher layers. We do not make use of sentic patterns BIBREF31 in this paper but we do plan to explore that research direction as a part of our future work. In the literature, it is found that sarcasm is user-specific too, i.e., some users have a particular tendency to post more sarcastic tweets than others. This acts as a primary intuition for us to extract personality-based features for sarcasm detection. The Proposed Framework As discussed in the literature BIBREF5 , sarcasm detection may depend on sentiment and other cognitive aspects. For this reason, we incorporate both sentiment and emotion clues in our framework. Along with these, we also argue that personality of the opinion holder is an important factor for sarcasm detection. In order to address all of these variables, we create different models for each of them, namely: sentiment, emotion and personality. 
The idea is to train each model on its corresponding benchmark dataset and, hence, use such pre-trained models together to extract sarcasm-related features from the sarcasm datasets. Now, the viable research question here is - Do these models help to improve sarcasm detection performance?' Literature shows that they improve the performance but not significantly. Thus, do we need to consider those factors in spotting sarcastic sentences? Aren't n-grams enough for sarcasm detection? Throughout the rest of this paper, we address these questions in detail. The training of each model is done using a CNN. Below, we explain the framework in detail. Then, we discuss the pre-trained models. Figure FIGREF6 presents a visualization of the proposed framework. General CNN Framework CNN can automatically extract key features from the training data. It grasps contextual local features from a sentence and, after several convolution operations, it forms a global feature vector out of those local features. CNN does not need the hand-crafted features used in traditional supervised classifiers. Such hand-crafted features are difficult to compute and a good guess for encoding the features is always necessary in order to get satisfactory results. CNN, instead, uses a hierarchy of local features which are important to learn context. The hand-crafted features often ignore such a hierarchy of local features. Features extracted by CNN can therefore be used instead of hand-crafted features, as they carry more useful information. The idea behind convolution is to take the dot product of a vector of INLINEFORM0 weights INLINEFORM1 also known as kernel vector with each INLINEFORM2 -gram in the sentence INLINEFORM3 to obtain another sequence of features INLINEFORM4 . DISPLAYFORM0 Thus, a max pooling operation is applied over the feature map and the maximum value INLINEFORM0 is taken as the feature corresponding to this particular kernel vector. Similarly, varying kernel vectors and window sizes are used to obtain multiple features BIBREF32 . For each word INLINEFORM1 in the vocabulary, a INLINEFORM2 -dimensional vector representation is given in a look up table that is learned from the data BIBREF33 . The vector representation of a sentence, hence, is a concatenation of vectors for individual words. Similarly, we can have look up tables for other features. One might want to provide features other than words if these features are suspected to be helpful. The convolution kernels are then applied to word vectors instead of individual words. We use these features to train higher layers of the CNN, in order to represent bigger groups of words in sentences. We denote the feature learned at hidden neuron INLINEFORM0 in layer INLINEFORM1 as INLINEFORM2 . Multiple features may be learned in parallel in the same CNN layer. The features learned in each layer are used to train the next layer: DISPLAYFORM0 where * indicates convolution and INLINEFORM0 is a weight kernel for hidden neuron INLINEFORM1 and INLINEFORM2 is the total number of hidden neurons. The CNN sentence model preserves the order of words by adopting convolution kernels of gradually increasing sizes that span an increasing number of words and ultimately the entire sentence. As mentioned above, each word in a sentence is represented using word embeddings. We employ the publicly available word2vec vectors, which were trained on 100 billion words from Google News. The vectors are of dimensionality 300, trained using the continuous bag-of-words architecture BIBREF33 . 
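As a rough illustration of the convolution and max-pooling operations just described, the following PyTorch sketch implements a single-layer sentence encoder over word embeddings. The kernel widths, filter counts and the 100-dimensional fully-connected output are illustrative choices rather than the authors' exact configuration, and the deeper hierarchy of convolutional layers described above is collapsed into one layer.

```python
# Minimal sketch of a convolution + max-pooling sentence encoder of the kind
# described above. Kernel widths and filter counts are illustrative choices.
import torch
import torch.nn as nn


class CNNSentenceEncoder(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 300,
                 num_filters: int = 100, kernel_sizes=(3, 4, 5),
                 feature_dim: int = 100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One convolution per kernel width: each kernel is dotted with every
        # n-gram of word vectors, producing one feature map per filter.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        # Fully-connected layer whose activations serve as sentence features.
        self.fc = nn.Linear(num_filters * len(kernel_sizes), feature_dim)

    def forward(self, word_ids: torch.Tensor) -> torch.Tensor:
        # word_ids: (batch, seq_len) -> (batch, embed_dim, seq_len) for Conv1d
        x = self.embed(word_ids).transpose(1, 2)
        # Max-over-time pooling keeps the strongest response of each filter.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))  # (batch, feature_dim)


encoder = CNNSentenceEncoder(vocab_size=20000)
features = encoder(torch.randint(0, 20000, (4, 40)))
print(features.shape)  # torch.Size([4, 100])
```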
Words not present in the set of pre-trained words are initialized randomly. However, while training the neural network, we use non-static representations. These include the word vectors, taken as input, into the list of parameters to be learned during training. Two primary reasons motivated us to use non-static channels as opposed to static ones. Firstly, the common presence of informal language and words in tweets resulted in a relatively high random initialization of word vectors due to the unavailability of these words in the word2vec dictionary. Secondly, sarcastic sentences are known to include polarity shifts in sentimental and emotional degrees. For example, “I love the pain present in breakups" is a sarcastic sentence with a significant change in sentimental polarity. As word2vec was not trained to incorporate these nuances, we allow our models to update the embeddings during training in order to include them. Each sentence is wrapped to a window of INLINEFORM0 , where INLINEFORM1 is the maximum number of words amongst all sentences in the dataset. We use the output of the fully-connected layer of the network as our feature vector. We have done two kinds of experiments: firstly, we used CNN for the classification; secondly, we extracted features from the fully-connected layer of the CNN and fed them to an SVM for the final classification. The latter CNN-SVM scheme is quite useful for text classification as shown by Poria et al. BIBREF18 . We carry out n-fold cross-validation on the dataset using CNN. In every fold iteration, in order to obtain the training and test features, the output of the fully-connected layer is treated as features to be used for the final classification using SVM. Table TABREF12 shows the training settings for each CNN model developed in this work. ReLU is used as the non-linear activation function of the network. The network configurations of all models developed in this work are given in Table TABREF12 . Sentiment Feature Extraction Model As discussed above, sentiment clues play an important role for sarcastic sentence detection. In our work, we train a CNN (see Section SECREF5 for details) on a sentiment benchmark dataset. This pre-trained model is then used to extract features from the sarcastic datasets. In particular, we use Semeval 2014 BIBREF34 Twitter Sentiment Analysis Dataset for the training. This dataset contains 9,497 tweets out of which 5,895 are positive, 3,131 are negative and 471 are neutral. The fully-connected layer of the CNN used for sentiment feature extraction has 100 neurons, so 100 features are extracted from this pre-trained model. The final softmax determines whether a sentence is positive, negative or neutral. Thus, we have three neurons in the softmax layer. Emotion Feature Extraction Model We use the CNN structure as described in Section SECREF5 for emotional feature extraction. As a dataset for extracting emotion-related features, we use the corpus developed by BIBREF35 . This dataset consists of blog posts labeled by their corresponding emotion categories. As emotion taxonomy, the authors used six basic emotions, i.e., Anger, Disgust, Surprise, Sadness, Joy and Fear. In particular, the blog posts were split into sentences and each sentence was labeled. The dataset contains 5,205 sentences labeled by one of the emotion labels. After employing this model on the sarcasm dataset, we obtained a 150-dimensional feature vector from the fully-connected layer. 
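The CNN-SVM scheme described above can be sketched as follows: within each cross-validation fold, the activations of the network's fully-connected layer are treated as features for an SVM classifier. The sketch assumes a feature-extracting network such as the hypothetical CNNSentenceEncoder from the previous sketch and uses scikit-learn; in the procedure described above, the CNN would first be trained on each fold's training portion before its features are extracted.

```python
# Sketch of the CNN-SVM scheme: within each cross-validation fold, the CNN's
# fully-connected layer activations are treated as features for an SVM.
# The encoder stands in for a CNN that would first be trained on the fold's
# training split in the procedure described above.
import numpy as np
import torch
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC


def extract_features(encoder, word_ids: torch.Tensor) -> np.ndarray:
    """Run the network in inference mode and return its FC-layer outputs."""
    encoder.eval()
    with torch.no_grad():
        return encoder(word_ids).cpu().numpy()


def cnn_svm_cross_validation(encoder, word_ids, labels, n_folds: int = 5):
    """Accumulate SVM predictions over all folds, to be scored at the end."""
    features = extract_features(encoder, word_ids)
    labels = np.asarray(labels)
    predictions = np.empty_like(labels)
    for train_idx, test_idx in StratifiedKFold(n_splits=n_folds).split(features, labels):
        svm = SVC(kernel="linear")
        svm.fit(features[train_idx], labels[train_idx])
        predictions[test_idx] = svm.predict(features[test_idx])
    return predictions
```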
As the aim of training is to classify each sentence into one of the six emotion classes, we used six neurons in the softmax layer. Personality Feature Extraction Model Detecting personality from text is a well-known challenging problem. In our work, we use five personality traits described by BIBREF36 , i.e., Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism, sometimes abbreviated as OCEAN (by their first letters). As a training dataset, we use the corpus developed by BIBREF36 , which contains 2,400 essays labeled by one of the five personality traits each. The fully-connected layer has 150 neurons, which are treated as the features. We concatenate the feature vector of each personality dimension in order to create the final feature vector. Thus, the personality model ultimately extracts a 750-dimensional feature vector (150-dimensional feature vector for each of the five personality traits). This network is replicated five times, one for each personality trait. In particular, we create a CNN for each personality trait and the aim of each CNN is to classify a sentence into binary classes, i.e., whether it expresses a personality trait or not. Baseline Method and Features CNN can also be employed on the sarcasm datasets in order to identify sarcastic and non-sarcastic tweets. We term the features extracted from this network baseline features, the method as baseline method and the CNN architecture used in this baseline method as baseline CNN. Since the fully-connected layer has 100 neurons, we have 100 baseline features in our experiment. This method is termed baseline method as it directly aims to classify a sentence as sarcastic vs non-sarcastic. The baseline CNN extracts the inherent semantics from the sarcastic corpus by employing deep domain understanding. The process of using baseline features with other features extracted from the pre-trained model is described in Section SECREF24 . Experimental Results and Discussion In this section, we present the experimental results using different feature combinations and compare them with the state of the art. For each feature we show the results using only CNN and using CNN-SVM (i.e., when the features extracted by CNN are fed to the SVM). Macro-F1 measure is used as an evaluation scheme in the experiments. Sarcasm Datasets Used in the Experiment This dataset was created by BIBREF8 . The tweets were downloaded from Twitter using #sarcasm as a marker for sarcastic tweets. It is a monolingual English dataset which consists of a balanced distribution of 50,000 sarcastic tweets and 50,000 non-sarcastic tweets. Since sarcastic tweets are less frequently used BIBREF8 , we also need to investigate the robustness of the selected features and the model trained on these features on an imbalanced dataset. To this end, we used another English dataset from BIBREF8 . It consists of 25,000 sarcastic tweets and 75,000 non-sarcastic tweets. We have obtained this dataset from The Sarcasm Detector. It contains 120,000 tweets, out of which 20,000 are sarcastic and 100,000 are non-sarcastic. We randomly sampled 10,000 sarcastic and 20,000 non-sarcastic tweets from the dataset. Visualization of both the original and subset data show similar characteristics. A two-step methodology has been employed in filtering the datasets used in our experiments. Firstly, we identified and removed all the “user", “URL" and “hashtag" references present in the tweets using efficient regular expressions. 
Special emphasis was given to this step to avoid traces of hashtags, which might trigger the models to provide biased results. Secondly, we used NLTK Twitter Tokenizer to ensure proper tokenization of words along with special symbols and emoticons. Since our deep CNNs extract contextual information present in tweets, we include emoticons as part of the vocabulary. This enables the emoticons to hold a place in the word embedding space and aid in providing information about the emotions present in the sentence. Merging the Features Throughout this research, we have carried out several experiments with various feature combinations. For the sake of clarity, we explain below how the features extracted using difference models are merged. In the standard feature merging process, we first extract the features from all deep CNN based feature extraction models and then we concatenate them. Afterwards, SVM is employed on the resulted feature vector. In another setting, we use the features extracted from the pre-trained models as the static channels of features in the CNN of the baseline method. These features are appended to the hidden layer of the baseline CNN, preceding the final output softmax layer. For comparison, we have re-implemented the state-of-the-art methods. Since BIBREF9 did not mention about the sentiment lexicon they use in the experiment, we used SenticNet BIBREF37 in the re-implementation of their method. Results on Dataset 1 As shown in Table TABREF29 , for every feature CNN-SVM outperforms the performance of the CNN. Following BIBREF6 , we have carried out a 5-fold cross-validation on this dataset. The baseline features ( SECREF16 ) perform best among other features. Among all the pre-trained models, the sentiment model (F1-score: 87.00%) achieves better performance in comparison with the other two pre-trained models. Interestingly, when we merge the baseline features with the features extracted by the pre-trained deep NLP models, we only get 0.11% improvement over the F-score. It means that the baseline features alone are quite capable to detect sarcasm. On the other hand, when we combine sentiment, emotion and personality features, we obtain 90.70% F1-score. This indicates that the pre-trained features are indeed useful for sarcasm detection. We also compare our approach with the best research study conducted on this dataset (Table TABREF30 ). Both the proposed baseline model and the baseline + sentiment + emotion + personality model outperform the state of the art BIBREF9 , BIBREF8 . One important difference with the state of the art is that BIBREF8 used relatively larger feature vector size ( INLINEFORM0 500,000) than we used in our experiment (1,100). This not only prevents our model to overfit the data but also speeds up the computation. Thus, we obtain an improvement in the overall performance with automatic feature extraction using a relatively lower dimensional feature space. In the literature, word n-grams, skipgrams and character n-grams are used as baseline features. According to Ptacek et al. BIBREF8 , these baseline features along with the other features (sentiment features and part-of-speech based features) produced the best performance. However, Ptacek et al. did not analyze the performance of these features when they were not used with the baseline features. Pre-trained word embeddings play an important role in the performance of the classifier because, when we use randomly generated embeddings, performance falls down to 86.23% using all features. 
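A minimal sketch of the standard feature-merging process described above is given below: the per-tweet feature blocks produced by the baseline CNN (100 dimensions), the sentiment model (100), the emotion model (150) and the personality model (750) are concatenated into the 1,100-dimensional vector mentioned above, and an SVM is trained on the merged representation. The arrays are random placeholders standing in for features already extracted by the corresponding networks.

```python
# Sketch of the standard feature-merging process: per-tweet feature blocks
# from the baseline CNN and the three pre-trained models are concatenated and
# an SVM is trained on the merged vectors. Random arrays stand in for
# features already extracted by the corresponding networks.
import numpy as np
from sklearn.svm import SVC

n_tweets = 1000  # illustrative corpus size
baseline = np.random.randn(n_tweets, 100)       # baseline CNN features
sentiment = np.random.randn(n_tweets, 100)      # sentiment model features
emotion = np.random.randn(n_tweets, 150)        # emotion model features
personality = np.random.randn(n_tweets, 750)    # 5 traits x 150 features each
labels = np.random.randint(0, 2, size=n_tweets)  # sarcastic vs. non-sarcastic

merged = np.hstack([baseline, sentiment, emotion, personality])
print(merged.shape)  # (1000, 1100): the 1,100-dimensional vector noted above

svm = SVC(kernel="linear")
svm.fit(merged, labels)
```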
Results on Dataset 2 5-fold cross-validation has been carried out on Dataset 2. Also for this dataset, we get the best accuracy when we use all features. Baseline features have performed significantly better (F1-score: 92.32%) than all other features. Supporting the observations we have made from the experiments on Dataset 1, we see CNN-SVM outperforming CNN on Dataset 2. However, when we use all the features, CNN alone (F1-score: 89.73%) does not outperform the state of the art BIBREF8 (F1-score: 92.37%). As shown in Table TABREF30 , CNN-SVM on the baseline + sentiment + emotion + personality feature set outperforms the state of the art (F1-score: 94.80%). Among the pre-trained models, the sentiment model performs best (F1-score: 87.00%). Table TABREF29 shows the performance of different feature combinations. The gap between the F1-scores of only baseline features and all features is larger on the imbalanced dataset than the balanced dataset. This supports our claim that sentiment, emotion and personality features are very useful for sarcasm detection, thanks to the pre-trained models. The F1-score using sentiment features when combined with baseline features is 94.60%. On both of the datasets, emotion and sentiment features perform better than the personality features. Interestingly, using only sentiment, emotion and personality features, we achieve 90.90% F1-score. Results on Dataset 3 Experimental results on Dataset 3 show the similar trends (Table TABREF30 ) as compared to Dataset 1 and Dataset 2. The highest performance (F1-score 93.30%) is obtained when we combine baseline features with sentiment, emotion and personality features. In this case, also CNN-SVM consistently performs better than CNN for every feature combination. The sentiment model is found to be the best pre-trained model. F1-score of 84.43% is obtained when we merge sentiment, emotion and personality features. Dataset 3 is more complex and non-linear in nature compared to the other two datasets. As shown in Table TABREF30 , the methods by BIBREF9 and BIBREF8 perform poorly on this dataset. The TP rate achieved by BIBREF9 is only 10.07% and that means their method suffers badly on complex data. The approach of BIBREF8 has also failed to perform well on Dataset 3, achieving 62.37% with a better TP rate of 22.15% than BIBREF9 . On the other hand, our proposed model performs consistently well on this dataset achieving 93.30%. Testing Generalizability of the Models and Discussions To test the generalization capability of the proposed approach, we perform training on Dataset 1 and test on Dataset 3. The F1-score drops down dramatically to 33.05%. In order to understand this finding, we visualize each dataset using PCA (Figure FIGREF17 ). It depicts that, although Dataset 1 is mostly linearly separable, Dataset 3 is not. A linear kernel that performs well on Dataset 1 fails to provide good performance on Dataset 3. If we use RBF kernel, it overfits the data and produces worse results than what we get using linear kernel. Similar trends are seen in the performance of other two state-of-the-art approaches BIBREF9 , BIBREF8 . Thus, we decide to perform training on Dataset 3 and test on the Dataset 1. As expected better performance is obtained with F1-score 76.78%. However, the other two state-of-the-art approaches fail to perform well in this setting. While the method by BIBREF9 obtains F1-score of 47.32%, the approach by BIBREF8 achieves 53.02% F1-score when trained on Dataset 3 and tested on Dataset 1. 
Below, we discuss about this generalizability issue of the models developed or referred in this paper. As discussed in the introduction, sarcasm is very much topic-dependent and highly contextual. For example, let us consider the tweet “I am so glad to see Tanzania played very well, I can now sleep well :P". Unless one knows that Tanzania actually did not play well in that game, it is not possible to spot the sarcastic nature of this sentence. Thus, an n-gram based sarcasm detector trained at time INLINEFORM0 may perform poorly to detect sarcasm in the tweets crawled at time INLINEFORM1 (given that there is a considerable gap between these time stamps) because of the diversity of the topics (new events occur, new topics are discussed) of the tweets. Sentiment and other contextual clues can help to spot the sarcastic nature in this kind of tweets. A highly positive statement which ends with a emoticon expressing joke can be sarcastic. State-of-the-art methods lack these contextual information which, in our case, we extract using pre-trained sentiment, emotion and personality models. Not only these pre-trained models, the baseline method (baseline CNN architecture) performs better than the state-of-the-art models in this generalizability test setting. In our generalizability test, when the pre-trained features are used with baseline features, we get 4.19% F1-score improvement over the baseline features. On the other hand, when they are not used with the baseline features, together they produce 64.25% F1-score. Another important fact is that an n-grams model cannot perform well on unseen data unless it is trained on a very large corpus. If most of the n-grams extracted from the unseen data are not in the vocabulary of the already trained n-grams model, in fact, the model will produce a very sparse feature vector representation of the dataset. Instead, we use the word2vec embeddings as the source of the features, as word2vec allows for the computation of similarities between unseen data and training data. Baseline Features vs Pre-trained Features Our experimental results show that the baseline features outperform the pre-trained features for sarcasm detection. However, the combination of pre-trained features and baseline features beats both of themselves alone. It is counterintuitive, since experimental results prove that both of those features learn almost the same global and contextual features. In particular, baseline network dominates over pre-trained network as the former learns most of the features learned by the latter. Nonetheless, the combination of baseline and pre-trained classifiers improves the overall performance and generalizability, hence proving their effectiveness in sarcasm detection. Experimental results show that sentiment and emotion features are the most useful features, besides baseline features (Figure FIGREF36 ). Therefore, in order to reach a better understanding of the relation between personality features among themselves and with other pre-trained features, we carried out Spearman correlation testing. Results, displayed in Table TABREF39 , show that those features are highly correlated with each other. Conclusion In this work, we developed pre-trained sentiment, emotion and personality models for identifying sarcastic text using CNN, which are found to be very effective for sarcasm detection. In the future, we plan to evaluate the performance of the proposed method on a large corpus and other domain-dependent corpora. 
Future work will also focus on analyzing past tweets and activities of users in order to better understand their personality and profile and, hence, further improve the disambiguation between sarcastic and non-sarcastic text. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What are the network's baseline features? Only give me the answer and do not output any other words.
Answer:
[ " The features extracted from CNN." ]
276
6,415
single_doc_qa
f74520b6919444e0809a6a2d6f6b3753
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Produced by Sue Asscher The Witch of Atlas by Percy Bysshe Shelley TO MARY (ON HER OBJECTING TO THE FOLLOWING POEM, UPON THE SCORE OF ITS CONTAINING NO HUMAN INTEREST). 1. How, my dear Mary,--are you critic-bitten (For vipers kill, though dead) by some review, That you condemn these verses I have written, Because they tell no story, false or true? What, though no mice are caught by a young kitten, _5 May it not leap and play as grown cats do, Till its claws come? Prithee, for this one time, Content thee with a visionary rhyme. 2. What hand would crush the silken-winged fly, The youngest of inconstant April's minions, _10 Because it cannot climb the purest sky, Where the swan sings, amid the sun's dominions? Not thine. Thou knowest 'tis its doom to die, When Day shall hide within her twilight pinions The lucent eyes, and the eternal smile, _15 Serene as thine, which lent it life awhile. 3. To thy fair feet a winged Vision came, Whose date should have been longer than a day, And o'er thy head did beat its wings for fame, And in thy sight its fading plumes display; _20 The watery bow burned in the evening flame. But the shower fell, the swift Sun went his way-- And that is dead.--O, let me not believe That anything of mine is fit to live! 4. Wordsworth informs us he was nineteen years _25 Considering and retouching Peter Bell; Watering his laurels with the killing tears Of slow, dull care, so that their roots to Hell Might pierce, and their wide branches blot the spheres Of Heaven, with dewy leaves and flowers; this well _30 May be, for Heaven and Earth conspire to foil The over-busy gardener's blundering toil. 5. My Witch indeed is not so sweet a creature As Ruth or Lucy, whom his graceful praise Clothes for our grandsons--but she matches Peter, _35 Though he took nineteen years, and she three days In dressing. Light the vest of flowing metre She wears; he, proud as dandy with his stays, Has hung upon his wiry limbs a dress Like King Lear's 'looped and windowed raggedness.' _40 6. If you strip Peter, you will see a fellow Scorched by Hell's hyperequatorial climate Into a kind of a sulphureous yellow: A lean mark, hardly fit to fling a rhyme at; In shape a Scaramouch, in hue Othello. _45 If you unveil my Witch, no priest nor primate Can shrive you of that sin,--if sin there be In love, when it becomes idolatry. THE WITCH OF ATLAS. 1. Before those cruel Twins, whom at one birth Incestuous Change bore to her father Time, _50 Error and Truth, had hunted from the Earth All those bright natures which adorned its prime, And left us nothing to believe in, worth The pains of putting into learned rhyme, A lady-witch there lived on Atlas' mountain _55 Within a cavern, by a secret fountain. 2. Her mother was one of the Atlantides: The all-beholding Sun had ne'er beholden In his wide voyage o'er continents and seas So fair a creature, as she lay enfolden _60 In the warm shadow of her loveliness;-- He kissed her with his beams, and made all golden The chamber of gray rock in which she lay-- She, in that dream of joy, dissolved away. 3. 
'Tis said, she first was changed into a vapour, _65 And then into a cloud, such clouds as flit, Like splendour-winged moths about a taper, Round the red west when the sun dies in it: And then into a meteor, such as caper On hill-tops when the moon is in a fit: _70 Then, into one of those mysterious stars Which hide themselves between the Earth and Mars. 4. Ten times the Mother of the Months had bent Her bow beside the folding-star, and bidden With that bright sign the billows to indent _75 The sea-deserted sand--like children chidden, At her command they ever came and went-- Since in that cave a dewy splendour hidden Took shape and motion: with the living form Of this embodied Power, the cave grew warm. _80 5. A lovely lady garmented in light From her own beauty--deep her eyes, as are Two openings of unfathomable night Seen through a Temple's cloven roof--her hair Dark--the dim brain whirls dizzy with delight. _85 Picturing her form; her soft smiles shone afar, And her low voice was heard like love, and drew All living things towards this wonder new. 6. And first the spotted cameleopard came, And then the wise and fearless elephant; _90 Then the sly serpent, in the golden flame Of his own volumes intervolved;--all gaunt And sanguine beasts her gentle looks made tame. They drank before her at her sacred fount; And every beast of beating heart grew bold, _95 Such gentleness and power even to behold. 7. The brinded lioness led forth her young, That she might teach them how they should forego Their inborn thirst of death; the pard unstrung His sinews at her feet, and sought to know _100 With looks whose motions spoke without a tongue How he might be as gentle as the doe. The magic circle of her voice and eyes All savage natures did imparadise. 8. And old Silenus, shaking a green stick _105 Of lilies, and the wood-gods in a crew Came, blithe, as in the olive copses thick Cicadae are, drunk with the noonday dew: And Dryope and Faunus followed quick, Teasing the God to sing them something new; _110 Till in this cave they found the lady lone, Sitting upon a seat of emerald stone. 9. And universal Pan, 'tis said, was there, And though none saw him,--through the adamant Of the deep mountains, through the trackless air, _115 And through those living spirits, like a want, He passed out of his everlasting lair Where the quick heart of the great world doth pant, And felt that wondrous lady all alone,-- And she felt him, upon her emerald throne. _120 10. And every nymph of stream and spreading tree, And every shepherdess of Ocean's flocks, Who drives her white waves over the green sea, And Ocean with the brine on his gray locks, And quaint Priapus with his company, _125 All came, much wondering how the enwombed rocks Could have brought forth so beautiful a birth;-- Her love subdued their wonder and their mirth. 11. The herdsmen and the mountain maidens came, And the rude kings of pastoral Garamant-- _130 Their spirits shook within them, as a flame Stirred by the air under a cavern gaunt: Pigmies, and Polyphemes, by many a name, Centaurs, and Satyrs, and such shapes as haunt Wet clefts,--and lumps neither alive nor dead, _135 Dog-headed, bosom-eyed, and bird-footed. 12. For she was beautiful--her beauty made The bright world dim, and everything beside Seemed like the fleeting image of a shade: No thought of living spirit could abide, _140 Which to her looks had ever been betrayed, On any object in the world so wide, On any hope within the circling skies, But on her form, and in her inmost eyes. 13. 
Which when the lady knew, she took her spindle _145 And twined three threads of fleecy mist, and three Long lines of light, such as the dawn may kindle The clouds and waves and mountains with; and she As many star-beams, ere their lamps could dwindle In the belated moon, wound skilfully; _150 And with these threads a subtle veil she wove-- A shadow for the splendour of her love. 14. The deep recesses of her odorous dwelling Were stored with magic treasures--sounds of air, Which had the power all spirits of compelling, _155 Folded in cells of crystal silence there; Such as we hear in youth, and think the feeling Will never die--yet ere we are aware, The feeling and the sound are fled and gone, And the regret they leave remains alone. _160 15. And there lay Visions swift, and sweet, and quaint, Each in its thin sheath, like a chrysalis, Some eager to burst forth, some weak and faint With the soft burthen of intensest bliss. It was its work to bear to many a saint _165 Whose heart adores the shrine which holiest is, Even Love's:--and others white, green, gray, and black, And of all shapes--and each was at her beck. 16. And odours in a kind of aviary Of ever-blooming Eden-trees she kept, _170 Clipped in a floating net, a love-sick Fairy Had woven from dew-beams while the moon yet slept; As bats at the wired window of a dairy, They beat their vans; and each was an adept, When loosed and missioned, making wings of winds, _175 To stir sweet thoughts or sad, in destined minds. 17. And liquors clear and sweet, whose healthful might Could medicine the sick soul to happy sleep, And change eternal death into a night Of glorious dreams--or if eyes needs must weep, _180 Could make their tears all wonder and delight, She in her crystal vials did closely keep: If men could drink of those clear vials, 'tis said The living were not envied of the dead. 18. Her cave was stored with scrolls of strange device, _185 The works of some Saturnian Archimage, Which taught the expiations at whose price Men from the Gods might win that happy age Too lightly lost, redeeming native vice; And which might quench the Earth-consuming rage _190 Of gold and blood--till men should live and move Harmonious as the sacred stars above; 19. And how all things that seem untameable, Not to be checked and not to be confined, Obey the spells of Wisdom's wizard skill; _195 Time, earth, and fire--the ocean and the wind, And all their shapes--and man's imperial will; And other scrolls whose writings did unbind The inmost lore of Love--let the profane Tremble to ask what secrets they contain. _200 20. And wondrous works of substances unknown, To which the enchantment of her father's power Had changed those ragged blocks of savage stone, Were heaped in the recesses of her bower; Carved lamps and chalices, and vials which shone _205 In their own golden beams--each like a flower, Out of whose depth a fire-fly shakes his light Under a cypress in a starless night. 21. At first she lived alone in this wild home, And her own thoughts were each a minister, _210 Clothing themselves, or with the ocean foam, Or with the wind, or with the speed of fire, To work whatever purposes might come Into her mind; such power her mighty Sire Had girt them with, whether to fly or run, _215 Through all the regions which he shines upon. 22. 
The Ocean-nymphs and Hamadryades, Oreads and Naiads, with long weedy locks, Offered to do her bidding through the seas, Under the earth, and in the hollow rocks, _220 And far beneath the matted roots of trees, And in the gnarled heart of stubborn oaks, So they might live for ever in the light Of her sweet presence--each a satellite. 23. 'This may not be,' the wizard maid replied; _225 'The fountains where the Naiades bedew Their shining hair, at length are drained and dried; The solid oaks forget their strength, and strew Their latest leaf upon the mountains wide; The boundless ocean like a drop of dew _230 Will be consumed--the stubborn centre must Be scattered, like a cloud of summer dust. 24. 'And ye with them will perish, one by one;-- If I must sigh to think that this shall be, If I must weep when the surviving Sun _235 Shall smile on your decay--oh, ask not me To love you till your little race is run; I cannot die as ye must--over me Your leaves shall glance--the streams in which ye dwell Shall be my paths henceforth, and so--farewell!'-- _240 25. She spoke and wept:--the dark and azure well Sparkled beneath the shower of her bright tears, And every little circlet where they fell Flung to the cavern-roof inconstant spheres And intertangled lines of light:--a knell _245 Of sobbing voices came upon her ears From those departing Forms, o'er the serene Of the white streams and of the forest green. 26. All day the wizard lady sate aloof, Spelling out scrolls of dread antiquity, _250 Under the cavern's fountain-lighted roof; Or broidering the pictured poesy Of some high tale upon her growing woof, Which the sweet splendour of her smiles could dye In hues outshining heaven--and ever she _255 Added some grace to the wrought poesy. 27. While on her hearth lay blazing many a piece Of sandal wood, rare gums, and cinnamon; Men scarcely know how beautiful fire is-- Each flame of it is as a precious stone _260 Dissolved in ever-moving light, and this Belongs to each and all who gaze upon. The Witch beheld it not, for in her hand She held a woof that dimmed the burning brand. 28. This lady never slept, but lay in trance _265 All night within the fountain--as in sleep. Its emerald crags glowed in her beauty's glance; Through the green splendour of the water deep She saw the constellations reel and dance Like fire-flies--and withal did ever keep _270 The tenour of her contemplations calm, With open eyes, closed feet, and folded palm. 29. And when the whirlwinds and the clouds descended From the white pinnacles of that cold hill, She passed at dewfall to a space extended, _275 Where in a lawn of flowering asphodel Amid a wood of pines and cedars blended, There yawned an inextinguishable well Of crimson fire--full even to the brim, And overflowing all the margin trim. _280 30. Within the which she lay when the fierce war Of wintry winds shook that innocuous liquor In many a mimic moon and bearded star O'er woods and lawns;--the serpent heard it flicker In sleep, and dreaming still, he crept afar-- _285 And when the windless snow descended thicker Than autumn leaves, she watched it as it came Melt on the surface of the level flame. 31. She had a boat, which some say Vulcan wrought For Venus, as the chariot of her star; _290 But it was found too feeble to be fraught With all the ardours in that sphere which are, And so she sold it, and Apollo bought And gave it to this daughter: from a car Changed to the fairest and the lightest boat _295 Which ever upon mortal stream did float. 32. 
And others say, that, when but three hours old, The first-born Love out of his cradle lept, And clove dun Chaos with his wings of gold, And like a horticultural adept, _300 Stole a strange seed, and wrapped it up in mould, And sowed it in his mother's star, and kept Watering it all the summer with sweet dew, And with his wings fanning it as it grew. 33. The plant grew strong and green, the snowy flower _305 Fell, and the long and gourd-like fruit began To turn the light and dew by inward power To its own substance; woven tracery ran Of light firm texture, ribbed and branching, o'er The solid rind, like a leaf's veined fan-- _310 Of which Love scooped this boat--and with soft motion Piloted it round the circumfluous ocean. 34. This boat she moored upon her fount, and lit A living spirit within all its frame, Breathing the soul of swiftness into it. _315 Couched on the fountain like a panther tame, One of the twain at Evan's feet that sit-- Or as on Vesta's sceptre a swift flame-- Or on blind Homer's heart a winged thought,-- In joyous expectation lay the boat. _320 35. Then by strange art she kneaded fire and snow Together, tempering the repugnant mass With liquid love--all things together grow Through which the harmony of love can pass; And a fair Shape out of her hands did flow-- _325 A living Image, which did far surpass In beauty that bright shape of vital stone Which drew the heart out of Pygmalion. 36. A sexless thing it was, and in its growth It seemed to have developed no defect _330 Of either sex, yet all the grace of both,-- In gentleness and strength its limbs were decked; The bosom swelled lightly with its full youth, The countenance was such as might select Some artist that his skill should never die, _335 Imaging forth such perfect purity. 37. From its smooth shoulders hung two rapid wings, Fit to have borne it to the seventh sphere, Tipped with the speed of liquid lightenings, Dyed in the ardours of the atmosphere: _340 She led her creature to the boiling springs Where the light boat was moored, and said: 'Sit here!' And pointed to the prow, and took her seat Beside the rudder, with opposing feet. 38. And down the streams which clove those mountains vast, _345 Around their inland islets, and amid The panther-peopled forests whose shade cast Darkness and odours, and a pleasure hid In melancholy gloom, the pinnace passed; By many a star-surrounded pyramid _350 Of icy crag cleaving the purple sky, And caverns yawning round unfathomably. 39. The silver noon into that winding dell, With slanted gleam athwart the forest tops, Tempered like golden evening, feebly fell; _355 A green and glowing light, like that which drops From folded lilies in which glow-worms dwell, When Earth over her face Night's mantle wraps; Between the severed mountains lay on high, Over the stream, a narrow rift of sky. _360 40. And ever as she went, the Image lay With folded wings and unawakened eyes; And o'er its gentle countenance did play The busy dreams, as thick as summer flies, Chasing the rapid smiles that would not stay, _365 And drinking the warm tears, and the sweet sighs Inhaling, which, with busy murmur vain, They had aroused from that full heart and brain. 41. 
And ever down the prone vale, like a cloud Upon a stream of wind, the pinnace went: _370 Now lingering on the pools, in which abode The calm and darkness of the deep content In which they paused; now o'er the shallow road Of white and dancing waters, all besprent With sand and polished pebbles:--mortal boat _375 In such a shallow rapid could not float. 42. And down the earthquaking cataracts which shiver Their snow-like waters into golden air, Or under chasms unfathomable ever Sepulchre them, till in their rage they tear _380 A subterranean portal for the river, It fled--the circling sunbows did upbear Its fall down the hoar precipice of spray, Lighting it far upon its lampless way. 43. And when the wizard lady would ascend _385 The labyrinths of some many-winding vale, Which to the inmost mountain upward tend-- She called 'Hermaphroditus!'--and the pale And heavy hue which slumber could extend Over its lips and eyes, as on the gale _390 A rapid shadow from a slope of grass, Into the darkness of the stream did pass. 44. And it unfurled its heaven-coloured pinions, With stars of fire spotting the stream below; And from above into the Sun's dominions _395 Flinging a glory, like the golden glow In which Spring clothes her emerald-winged minions, All interwoven with fine feathery snow And moonlight splendour of intensest rime, With which frost paints the pines in winter time. _400 45. And then it winnowed the Elysian air Which ever hung about that lady bright, With its aethereal vans--and speeding there, Like a star up the torrent of the night, Or a swift eagle in the morning glare _405 Breasting the whirlwind with impetuous flight, The pinnace, oared by those enchanted wings, Clove the fierce streams towards their upper springs. 46. The water flashed, like sunlight by the prow Of a noon-wandering meteor flung to Heaven; _410 The still air seemed as if its waves did flow In tempest down the mountains; loosely driven The lady's radiant hair streamed to and fro: Beneath, the billows having vainly striven Indignant and impetuous, roared to feel _415 The swift and steady motion of the keel. 47. Or, when the weary moon was in the wane, Or in the noon of interlunar night, The lady-witch in visions could not chain Her spirit; but sailed forth under the light _420 Of shooting stars, and bade extend amain Its storm-outspeeding wings, the Hermaphrodite; She to the Austral waters took her way, Beyond the fabulous Thamondocana,-- 48. Where, like a meadow which no scythe has shaven, _425 Which rain could never bend, or whirl-blast shake, With the Antarctic constellations paven, Canopus and his crew, lay the Austral lake-- There she would build herself a windless haven Out of the clouds whose moving turrets make _430 The bastions of the storm, when through the sky The spirits of the tempest thundered by: 49. A haven beneath whose translucent floor The tremulous stars sparkled unfathomably, And around which the solid vapours hoar, _435 Based on the level waters, to the sky Lifted their dreadful crags, and like a shore Of wintry mountains, inaccessibly Hemmed in with rifts and precipices gray, And hanging crags, many a cove and bay. _440 50. And whilst the outer lake beneath the lash Of the wind's scourge, foamed like a wounded thing, And the incessant hail with stony clash Ploughed up the waters, and the flagging wing Of the roused cormorant in the lightning flash _445 Looked like the wreck of some wind-wandering Fragment of inky thunder-smoke--this haven Was as a gem to copy Heaven engraven,-- 51. 
On which that lady played her many pranks, Circling the image of a shooting star, _450 Even as a tiger on Hydaspes' banks Outspeeds the antelopes which speediest are, In her light boat; and many quips and cranks She played upon the water, till the car Of the late moon, like a sick matron wan, _455 To journey from the misty east began. 52. And then she called out of the hollow turrets Of those high clouds, white, golden and vermilion, The armies of her ministering spirits-- In mighty legions, million after million, _460 They came, each troop emblazoning its merits On meteor flags; and many a proud pavilion Of the intertexture of the atmosphere They pitched upon the plain of the calm mere. 53. They framed the imperial tent of their great Queen _465 Of woven exhalations, underlaid With lambent lightning-fire, as may be seen A dome of thin and open ivory inlaid With crimson silk--cressets from the serene Hung there, and on the water for her tread _470 A tapestry of fleece-like mist was strewn, Dyed in the beams of the ascending moon. 54. And on a throne o'erlaid with starlight, caught Upon those wandering isles of aery dew, Which highest shoals of mountain shipwreck not, _475 She sate, and heard all that had happened new Between the earth and moon, since they had brought The last intelligence--and now she grew Pale as that moon, lost in the watery night-- And now she wept, and now she laughed outright. _480 55. These were tame pleasures; she would often climb The steepest ladder of the crudded rack Up to some beaked cape of cloud sublime, And like Arion on the dolphin's back Ride singing through the shoreless air;--oft-time _485 Following the serpent lightning's winding track, She ran upon the platforms of the wind, And laughed to hear the fire-balls roar behind. 56. And sometimes to those streams of upper air Which whirl the earth in its diurnal round, _490 She would ascend, and win the spirits there To let her join their chorus. Mortals found That on those days the sky was calm and fair, And mystic snatches of harmonious sound Wandered upon the earth where'er she passed, _495 And happy thoughts of hope, too sweet to last. 57. But her choice sport was, in the hours of sleep, To glide adown old Nilus, where he threads Egypt and Aethiopia, from the steep Of utmost Axume, until he spreads, _500 Like a calm flock of silver-fleeced sheep, His waters on the plain: and crested heads Of cities and proud temples gleam amid, And many a vapour-belted pyramid. 58. By Moeris and the Mareotid lakes, _505 Strewn with faint blooms like bridal chamber floors, Where naked boys bridling tame water-snakes, Or charioteering ghastly alligators, Had left on the sweet waters mighty wakes Of those huge forms--within the brazen doors _510 Of the great Labyrinth slept both boy and beast, Tired with the pomp of their Osirian feast. 59. And where within the surface of the river The shadows of the massy temples lie, And never are erased--but tremble ever _515 Like things which every cloud can doom to die, Through lotus-paven canals, and wheresoever The works of man pierced that serenest sky With tombs, and towers, and fanes, 'twas her delight To wander in the shadow of the night. _520 60. With motion like the spirit of that wind Whose soft step deepens slumber, her light feet Passed through the peopled haunts of humankind. 
Scattering sweet visions from her presence sweet, Through fane, and palace-court, and labyrinth mined _525 With many a dark and subterranean street Under the Nile, through chambers high and deep She passed, observing mortals in their sleep. 61. A pleasure sweet doubtless it was to see Mortals subdued in all the shapes of sleep. _530 Here lay two sister twins in infancy; There, a lone youth who in his dreams did weep; Within, two lovers linked innocently In their loose locks which over both did creep Like ivy from one stem;--and there lay calm _535 Old age with snow-bright hair and folded palm. 62. But other troubled forms of sleep she saw, Not to be mirrored in a holy song-- Distortions foul of supernatural awe, And pale imaginings of visioned wrong; _540 And all the code of Custom's lawless law Written upon the brows of old and young: 'This,' said the wizard maiden, 'is the strife Which stirs the liquid surface of man's life.' 63. And little did the sight disturb her soul.-- _545 We, the weak mariners of that wide lake Where'er its shores extend or billows roll, Our course unpiloted and starless make O'er its wild surface to an unknown goal:-- But she in the calm depths her way could take, _550 Where in bright bowers immortal forms abide Beneath the weltering of the restless tide. 64. And she saw princes couched under the glow Of sunlike gems; and round each temple-court In dormitories ranged, row after row, _555 She saw the priests asleep--all of one sort-- For all were educated to be so.-- The peasants in their huts, and in the port The sailors she saw cradled on the waves, And the dead lulled within their dreamless graves. _560 65. And all the forms in which those spirits lay Were to her sight like the diaphanous Veils, in which those sweet ladies oft array Their delicate limbs, who would conceal from us Only their scorn of all concealment: they _565 Move in the light of their own beauty thus. But these and all now lay with sleep upon them, And little thought a Witch was looking on them. 66. She, all those human figures breathing there, Beheld as living spirits--to her eyes _570 The naked beauty of the soul lay bare, And often through a rude and worn disguise She saw the inner form most bright and fair-- And then she had a charm of strange device, Which, murmured on mute lips with tender tone, _575 Could make that spirit mingle with her own. 67. Alas! Aurora, what wouldst thou have given For such a charm when Tithon became gray? Or how much, Venus, of thy silver heaven Wouldst thou have yielded, ere Proserpina _580 Had half (oh! why not all?) the debt forgiven Which dear Adonis had been doomed to pay, To any witch who would have taught you it? The Heliad doth not know its value yet. 68. 'Tis said in after times her spirit free _585 Knew what love was, and felt itself alone-- But holy Dian could not chaster be Before she stooped to kiss Endymion, Than now this lady--like a sexless bee Tasting all blossoms, and confined to none, _590 Among those mortal forms, the wizard-maiden Passed with an eye serene and heart unladen. 69. To those she saw most beautiful, she gave Strange panacea in a crystal bowl:-- They drank in their deep sleep of that sweet wave, _595 And lived thenceforward as if some control, Mightier than life, were in them; and the grave Of such, when death oppressed the weary soul, Was as a green and overarching bower Lit by the gems of many a starry flower. _600 70. 
For on the night when they were buried, she Restored the embalmers' ruining, and shook The light out of the funeral lamps, to be A mimic day within that deathy nook; And she unwound the woven imagery _605 Of second childhood's swaddling bands, and took The coffin, its last cradle, from its niche, And threw it with contempt into a ditch. 71. And there the body lay, age after age. Mute, breathing, beating, warm, and undecaying, _610 Like one asleep in a green hermitage, With gentle smiles about its eyelids playing, And living in its dreams beyond the rage Of death or life; while they were still arraying In liveries ever new, the rapid, blind _615 And fleeting generations of mankind. 72. And she would write strange dreams upon the brain Of those who were less beautiful, and make All harsh and crooked purposes more vain Than in the desert is the serpent's wake _620 Which the sand covers--all his evil gain The miser in such dreams would rise and shake Into a beggar's lap;--the lying scribe Would his own lies betray without a bribe. 73. The priests would write an explanation full, _625 Translating hieroglyphics into Greek, How the God Apis really was a bull, And nothing more; and bid the herald stick The same against the temple doors, and pull The old cant down; they licensed all to speak _630 Whate'er they thought of hawks, and cats, and geese, By pastoral letters to each diocese. 74. The king would dress an ape up in his crown And robes, and seat him on his glorious seat, And on the right hand of the sunlike throne _635 Would place a gaudy mock-bird to repeat The chatterings of the monkey.--Every one Of the prone courtiers crawled to kiss the feet Of their great Emperor, when the morning came, And kissed--alas, how many kiss the same! _640 75. The soldiers dreamed that they were blacksmiths, and Walked out of quarters in somnambulism; Round the red anvils you might see them stand Like Cyclopses in Vulcan's sooty abysm, Beating their swords to ploughshares;--in a band _645 The gaolers sent those of the liberal schism Free through the streets of Memphis, much, I wis, To the annoyance of king Amasis. 76. And timid lovers who had been so coy, They hardly knew whether they loved or not, _650 Would rise out of their rest, and take sweet joy, To the fulfilment of their inmost thought; And when next day the maiden and the boy Met one another, both, like sinners caught, Blushed at the thing which each believed was done _655 Only in fancy--till the tenth moon shone; 77. And then the Witch would let them take no ill: Of many thousand schemes which lovers find, The Witch found one,--and so they took their fill Of happiness in marriage warm and kind. _660 Friends who, by practice of some envious skill, Were torn apart--a wide wound, mind from mind!-- She did unite again with visions clear Of deep affection and of truth sincere. 80. These were the pranks she played among the cities _665 Of mortal men, and what she did to Sprites And Gods, entangling them in her sweet ditties To do her will, and show their subtle sleights, I will declare another time; for it is A tale more fit for the weird winter nights _670 Than for these garish summer days, when we Scarcely believe much more than we can see. End of Project Gutenberg's The Witch of Atlas, by Percy Bysshe Shelley Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Who did the Witch want to have reveal their own lies? Only give me the answer and do not output any other words.
Answer:
[ "The scribe." ]
276
7,970
single_doc_qa
8cd051e7a88d492aa3e9f33734b17637
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Paper Info Title: Two-stage Pipeline for Multilingual Dialect Detection Publish Date: Unknown Author List: Ankit Vaidya (from Pune Institute of Computer Technology), Aditya Kane (from Pune Institute of Computer Technology) Figures: Figure 1: Class distribution of dialects. Figure 2: System diagram for dialect classification. The LID classifies the input into one of 3 languages. The sample is then further classified into dialects by language-specific models. Figure 3: Confusion matrix of 9-way classification. Note that rows are normalized according to the number of samples in that class. Our complete results for Track-1 using the two-stage dialect detection pipeline. Model-* denotes the language of the models used for the experiments. Performance on Track-1 validation dataset of individual models used in the two-stage pipeline. "Lg" stands for language of the model used. Comparative results of two-way classification using the finetuned (F.T.) predictions and predictions adapted from three-way classification models. abstract Dialect Identification is a crucial task for localizing various Large Language Models. This paper outlines our approach to the VarDial 2023 DSL-TL shared task. Here we have to identify three or two dialects from three languages each, which results in a 9-way classification for Track-1 and a 6-way classification for Track-2 respectively. Our proposed approach consists of a two-stage system and outperforms other participants' systems and previous works in this domain. We achieve a score of 58.54% for Track-1 and 85.61% for Track-2. Our codebase is publicly available. Introduction Language has been the primary mode of communication for humans since the pre-historic ages. Studies have explored the evolution of language and outlined mathematical models that govern the intricacies of natural language. Inevitably, as humans established civilization in various parts of the world, this language was modified by, and for, the group of people occupying that particular geographical region. This gave rise to multiple national dialects of the same language. The VarDial workshop (colocated with EACL 2023) explores various dialects and variations of the same language. We participated in the Discriminating Between Similar Languages - True Labels (DSL-TL) shared task. In this task, the participants were provided with data from three languages, with each language having three varieties. This shared task consisted of two tracks: Track-1 featuring nine-way classification and Track-2 featuring six-way classification. The second track included two particular national dialects of each language (e.g. American English and British English), and the first track additionally included one general variety of each language with no specific national dialect attached. We ranked 1st in both of the tracks. Moreover, we beat the next best submission by a margin of 4.5% in the first task and 5.6% in the second task. We were the only team to surpass the organizer baseline scores. We present our winning solution in this paper. We used an end-to-end deep learning pipeline which consisted of a language identification model and three language-specific models, one for each language. We converged upon the best combination by doing an elaborate analysis of various models available. Furthermore, in this work we also analyze the performance of the pipeline as a whole and also provide an ablation study.
Lastly, we provide some future directions in this area of research. Related Work The present literature encompasses various aspects of dialect identification. We study this from three perspectives: large language models, language identification and dialect classification problems. Large Language Models The success of transformers and BERT based models was inevitable since the initial boom of the transformer 2017) model. In recent years, many other architectures like RoBERTa and ELECTRA have further pushed the state-of-the-art in this domain. Moreover, autoregressive models like GPT and GPT-2 have also shown their prowess. Multilingual versions of RoBERTA, namely XLM-RoBERTa are also available. Lastly, language specific models like Spanish BERT (la Rosa y Eduardo G. Ponferrada y Manu Romero y Paulo Villegas y Pablo González de Prado Salas y María Grandury, 2022) and Portuguese BERT are available as well. Our winning solution makes use of these large language models trained on specific languages. Language Identification Models Many multilingual language identification models have been developed in order to classify the language of the input sentence beforehand. Even though the initial works used n-gram models and generative mixture models or even conditional random fields and other classical machine learning methods like naive bayes , modern methods have shifted to the use of deep learning for language identification . Recent works have mainly focused on deep learning based language identification, where handling codemixed data is a big challenge in the domain. For our experiments, we use a version of XLM-RoBERTa finetuned on a language identification dataset 2 . This model has near-perfect test accuracy of 99.6%. Dialect Classification Dialect classification has been previously solved using statistical methods like Gaussian Mixture Models and Frame Selection Decoding or Support Vector Machines (SVM) . It has been explored relatively sparsely, mostly in the case for local languages . Deep learning approaches have been explored in previous editions of the VarDial workshop shared tasks and otherwise . Dialect classification was also explored previously as a part of other shared tasks . We want to stress that given the multilingual nature of the dataset, using the present methods directly was not an option. In our work, although we take inspiration from the previous works, we propose a novel system that surpasses the performance of the previous systems by a large margin. Data The dataset We observed that the class PT-BR had the most number of samples (2,724) and the class EN had the least number of samples (349), and thus the imbalance ratio was almost 1:8. We have illustrated the data distribution in Figure . We tried to mitigate this imbalance using over-sampling and weighted sampling methods. However, the improved data sampling method did not affect the performance. System Description This was a problem of multi-class classification having 9 classes for Track-1 and 6 classes for Track-2. The samples were belonging to 3 languages having 3 varieties each, so the classification pipeline was made in 2 stages. The Language Identification (LID) model which is the first stage classifies the sentence into 3 languages: English (EN), Spanish (ES) and Portuguese (PT). The LID is a pretrained XLM-RoBERTa that is fine-tuned for the task of language identification. It is able to classify the input sentence into 20 languages. We classify and separate the samples according to their language. 
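As a rough illustration of how this two-stage routing can be wired together at inference time, the sketch below uses the HuggingFace pipeline API; the LID checkpoint name and the per-language dialect checkpoints are illustrative assumptions, not the exact models used by the authors.

from transformers import pipeline

# Stage 1: language identification with a fine-tuned XLM-RoBERTa model.
# The checkpoint name below is an assumed example, not necessarily the one used in the paper.
lid = pipeline("text-classification",
               model="papluca/xlm-roberta-base-language-detection")

# Stage 2: one dialect classifier per language (hypothetical local checkpoints).
dialect_models = {
    "en": pipeline("text-classification", model="./roberta-base-en-dialects"),
    "es": pipeline("text-classification", model="./bert-base-spanish-dialects"),
    "pt": pipeline("text-classification", model="./bert-base-portuguese-dialects"),
}

def classify_dialect(sentence: str) -> str:
    lang = lid(sentence)[0]["label"].lower()            # e.g. "en", "es" or "pt"
    return dialect_models[lang](sentence)[0]["label"]   # e.g. "EN-GB", "ES-AR", "PT-BR"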
The samples corresponding to the specific languages are then fed into the language-specific models for dialect identification. For dialect identification we have used models like BERT and RoBERTa with a linear layer connected to the pooler output of the models. Then fine-tuning is done on the models for dialect identification using the samples corresponding to the specific languages. For the task of dialect identification we experimented with several pretrained models like XLM-RoBERTa, BERT, ELECTRA, GPT-2 and RoBERTa. All models were fine-tuned for 20 epochs with a learning rate of 1e-6 and weight decay 1e-6 with a batch size of 8. The best performing model checkpoint was chosen according to the epoch-wise validation macro-F1 score. Experiments and Results Experiments using Large Language Models For the task of Dialect Identification we tried various language-specific models like XLM-RoBERTa, BERT, ELECTRA, RoBERTa and GPT-2. The base variants of all these models were used, and all the models were used through the HuggingFace library. The pooler output of these models was passed through a linear layer and the models were fine-tuned. First, we experimented with different models for Track-1. All the models were trained for 20 epochs with learning rate 1e-6, weight decay 1e-6 and a batch size of 8. We used XLM-RoBERTa as the baseline for all 3 languages. The best performing models for the English language were RoBERTa and BERT, whereas GPT-2 was the worst performing. Similarly, the language-specific versions of RoBERTa and BERT performed well for Spanish and Portuguese respectively. Overall the worst performing model was GPT-2 across all 3 languages. The validation F1 scores are reported in the corresponding table. The two best-performing models for every language were chosen for Track-2. The same procedure as specified above was used and the F1 scores are reported in the corresponding table. The train and validation F1 scores for 2-class classification are higher for all models as compared to the F1 scores of the same models for 3-class classification. This was mainly due to the poor representation and classification accuracy of the third class. We observed symptoms of overfitting in all models after 12-15 epochs and the best validation F1 score was obtained in the range of 4-8 epochs. LID experiments The pipeline for dialect identification is divided into two parts, as the sentences in the dataset belong to different languages. The stages are described in Section 4. The XLM-RoBERTa we have used for language classification has a test accuracy of 99.6%, meaning it correctly classifies nearly all input sentences and hence can be treated as a perfect classifier. For the final pipeline we experimented using the two best performing models for each language in Track-1 and Track-2. For both the tracks we experimented with all 8 (2^3) possible combinations of models and calculated the validation F1 score for the combined validation dataset, which had sentences belonging to all languages. The validation scores for Track-1 and Track-2 are shown in their respective tables. For both the tracks, the three pipelines with the best validation F1 scores were chosen for submission. Using a 3-way classifier as a 2-way classifier In Track-1, participants are expected to train a classifier which classifies amongst 9 classes, and in Track-2, participants are expected to train a classifier which classifies amongst 6 classes. These 6 classes are a proper subset of the 9 classes from Track-1.
Thus, an intuitive baseline for Track-2 is to use the model finetuned for Track-1, whilst considering only the relevant classes for the latter task. The classes EN , ES and P T , i.e. the classes without any national dialect associated with them are not included in Track-2 as compared to Track-1. Thus, we calculate the predictions for the Track-2 validation dataset using the models for Track-1 and exclude the metrics for Track-1 specific classes to get the metrics for this "adapted" 2-way classification. We show the results of this experiment in Table and observe that, as expected, the adapted 2-way classification performs worse compared to the explicitly finetuned variant. Results for Track-1 and Track-2 We now present our experiments and their performance for both tracks. Our experiments for Track-1 are described in Table and our experiments for Track-2 are described in Table . The participants were allowed three submissions for evaluation on the test set, so we submitted predictions using the three systems which performed the best on the validation set. As mentioned in Section 5.2, we performed 2 3 , i.e. a total of 8 experiments using the two best models for each language. We observed that RoBERTa base on English, Spanish BERT base on Spanish and Portuguese BERT base performed the best on the testing set for Track-1. The same combination, with RoBERTa base for English, worked best for Track-2. All of our submissions were the top submissions for each track, which surpassed the next best competitors by a margin of 4.5% and 5.6% for Track-1 and Track-2 respectively. Ablation of best submissions We hereby make some observations of our submissions and other experiments. To assist this, we plot the confusion matrices of our best submissions for Track-1 and Track-2 in Figures respectively. Note that these confusion matrices have their rows (i.e. true labels axes) normalized according to the number of samples in the class. Here are observations from our experiments: 1. BERT-based models outperform other models across all languages: We observe that BERT-based models outperform ELECTRA-based and GPT-2-based models, as shown in Table . We speculate this is because of the inherent architecture of BERT, which combines semantic learning with knowledge retention. This combination of traits is particularly useful for this task. 2. Common labels perform the worst across all languages: We observe that the common labels EN , ES and P T perform the worst, both in the individual as well as the two-stage setup. We hypothesize this is because of the absence of dialect specific words, or words that are specific to the geographical origin of the national dialect (for example, "Yankees" for EN-US and "Oxford" for EN-GB). 3. English models work better than models of other languages: It can be noted from Figures 4 and 3 that the English models have the best performance across all classes. This can be attributed to two reasons: absence of national dialect specific words and lesser pretraining data in the case of Portuguese. 4. British English is most correctly classified class: We can observe that the Spanish or Portuguese models make equal number of mistakes in the case of either national dialect, in the case of Track-2 (see Figure ). However, in the case of English, the label EN − GB is correctly classified for more than 95% of the cases. We speculate this is because British English involves slightly distinctive grammar and semantics, which help the model separate it from other classes. 5. 
The proposed 2-step method is scalable for multiple language dialect classification: We can strongly assert that the novel 2-step deep learning method for multilingual dialect classification is a scalable method for the task due to two specific reasons: firstly, the multilingual models (like XLM-RoBERTa) might not have the vocabulary as well as the learning capabilities to learn the minute differences between individual dialects. Secondly, this system can be quickly expanded for a new language by simply adding a language specific dialect classifier, provided the language identification model supports that particular language. Conclusion In this paper we propose a two-stage classification pipeline for dialect identification for multilingual corpora. We conduct thorough ablations on this setup and provide valuable insights. We foresee multiple future directions for this work. The first is to expand this work to many languages and dialects. Secondly, it is a worthwhile research direction to distill this multi-model setup into a single model with multiple prediction heads. The obvious limitation of this system is the excessive memory consumption due to the usage of language specific models. For low resource languages this system is difficult to train and scale. We hope that these problems will be addressed by researchers in future works. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What is the score achieved by the authors for Track-2? Only give me the answer and do not output any other words.
Answer:
[ "85.61%." ]
276
3,132
single_doc_qa
282f5fcf2ae74c51bb67afd6b01983c1
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Brooksley Elizabeth Born (born August 27, 1940) is an American attorney and former public official who, from August 26, 1996, to June 1, 1999, was chair of the Commodity Futures Trading Commission (CFTC), the federal agency which oversees the U.S. futures and commodity options markets. During her tenure on the CFTC, Born lobbied Congress and the President to give the CFTC oversight of off-exchange markets for derivatives, in addition to its role with respect to exchange-traded derivatives, but her warnings were ignored or dismissed, and her calls for reform resisted by other regulators.<ref name="nytimes">Goodman, Peter S. The Reckoning - Taking Hard New Look at a Greenspan Legacy, The New York Times, October 9, 2008.</ref> Born resigned as chairperson on June 1, 1999, shortly after Congress passed legislation prohibiting her agency from regulating derivatives. In 2009, Born received the John F. Kennedy Profiles in Courage Award, along with Sheila Bair of the Federal Deposit Insurance Corporation, in recognition of the "political courage she demonstrated in sounding early warnings about conditions that contributed" to the 2007-08 financial crisis. Early life and education Born graduated from Abraham Lincoln High School (San Francisco, California) at the age of 16. She then attended Stanford University, where she majored in English and was graduated with the class of 1961. She initially wanted to become a doctor, but a guidance counsellor at Stanford advised her against medicine, so she majored in English literature instead. She then attended Stanford Law School, one of only seven women in her class. She was the first female student ever to be named president of the Stanford Law Review. She received the "Outstanding Senior" award and graduated as valedictorian of the class of 1964. Legal career Immediately after law school Born was selected as a law clerk to judge Henry Edgerton of the U.S. Court of Appeals for the District of Columbia Circuit. It was during this time that she met her first husband, Jacob C. Landau, who was a journalist covering the Federal courts at the time. Following her clerkship, she became an associate at the Washington, D.C.-based international law firm of Arnold & Porter. Born was attracted to Arnold & Porter because it was one of the few major law firms to have a woman partner at that time, Carolyn Agger, who was the head of the tax practice. Born took a two-year leave of absence from Arnold & Porter to accompany her first husband to Boston, where he had received a fellowship. During that time she worked as a research assistant to law professor Alan Dershowitz. Born's early career at Arnold & Porter focused on international trade law, in which she represented a number of Swiss industries and the government of Switzerland. She developed a practice representing clients in numerous complex litigation and arbitration cases involving financial market transactions. Among her high-profile cases was the matter of the Hunt Brothers attempt to corner the market in silver in the 1970s. She made partner at Arnold & Porter, after moving to a three-day schedule to help raise her second child, and eventually rose to be the head of the firm's derivatives practice. Born was among the first female attorneys to systematically address inequities regarding how the laws treated women. 
Born and another female lawyer, Marna Tucker, taught what is considered to have been the first "Women and the Law" course at Catholic University’s Columbus School of Law. The class exclusively concerned prejudicial treatment of women under the laws of the United States, past and present. Born and Tucker were surprised to discover that there was no textbook on the issue at the time. Born is also one of the co-founders of the National Women's Law Center. Born also helped rewrite the American Bar Association rules to make it possible for more women and minorities to sit on federal bench. During her long legal career, and into her retirement, Born did much pro bono and other types of volunteer work. She was active in the American Bar Association, the largest professional organization of lawyers in the United States. Initially Born was named a member of the governing council of the ABA's Individual Rights Section, eventually becoming chairperson. Born and Tucker founded the ABA Women's Caucus, the first organization of female lawyers in the ABA. She held several other senior positions in the ABA, including being named the first woman member of the ABA's Standing Committee on the Federal Judiciary. As a member of the Judiciary Committee, Born provided testimony and opinion on persons nominated for federal judgeships. In 1980 she was named chair of the committee. As chair of the committee, Born was invited to address the U.S. Congress regarding the nomination of Judge Sandra Day O'Connor to the U.S. Supreme Court. In 1993, Born's name was floated as a possible candidate for Attorney General of the United States, but Janet Reno was nominated. In July 2009, Nancy Pelosi appointed Brooksley Born as a commissioner to the Financial Crisis Inquiry Commission (FCIC). Born and the OTC derivatives market Born was appointed to the CFTC on April 15, 1994, by President Bill Clinton. Due to litigation against Bankers Trust Company by Procter and Gamble and other corporate clients, Born and her team at the CFTC sought comments on the regulation of over-the-counter derivatives, a first step in the process of writing CFTC regulations to supplement the existing regulations of the Federal Reserve System, the Options Clearing Corporation, and the National Association of Insurance Commissioners. Born was particularly concerned about swaps, financial instruments that are traded over the counter between banks, insurance companies or other funds or companies, and thus have no transparency except to the two counterparties and the counterparties' regulators, if any. CFTC regulation was strenuously opposed by Federal Reserve chairman Alan Greenspan, and by Treasury Secretaries Robert Rubin and Lawrence Summers. On May 7, 1998, former SEC Chairman Arthur Levitt joined Rubin and Greenspan in objecting to the issuance of the CFTC's concept release. Their response dismissed Born's analysis and focused on the hypothetical possibility that CFTC regulation of swaps and other OTC derivative instruments could create a "legal uncertainty" regarding such financial instruments, hypothetically reducing the value of the instruments. They argued that the imposition of regulatory costs would "stifle financial innovation" and encourage financial capital to transfer its transactions offshore. 
The disagreement between Born and the Executive Office's top economic policy advisors has been described not only as a classic Washington turf war, but also a war of ideologies, insofar as it is possible to argue that Born's actions were consistent with Keynesian and neoclassical economics while Greenspan, Rubin, Levitt, and Summers consistently espoused neoliberal, and neoconservative policies. In 1998, a trillion-dollar hedge fund called Long Term Capital Management (LTCM) was near collapse. Using mathematical models to calculate debt risk, LTCM used derivatives to leverage $5 billion into more than $1 trillion, doing business with fifteen of Wall Street's largest financial institutions. The derivative transactions were not regulated, nor were investors able to evaluate LTCM's exposures. Born stated, "I thought that LTCM was exactly what I had been worried about". In the last weekend of September 1998, the President's working group was told that the entire American economy hung in the balance. After intervention by the Federal Reserve, the crisis was averted. In congressional hearings into the crisis, Greenspan acknowledged that language had been introduced into an agriculture bill that would prevent CFTC from regulating the derivatives which were at the center of the crisis that threatened the US economy. U.S. Representative Maurice Hinchey (D-NY) asked "How many more failures do you think we'd have to have before some regulation in this area might be appropriate?" In response, Greenspan brushed aside the substance of Born's warnings with the simple assertion that "the degree of supervision of regulation of the over-the-counter derivatives market is quite adequate to maintain a degree of stability in the system". Born's warning was that there wasn't any regulation of them. Born's chief of staff, Michael Greenberger summed up Greenspan's position this way: "Greenspan didn't believe that fraud was something that needed to be enforced, and he assumed she probably did. And of course, she did." Under heavy pressure from the financial lobby, legislation prohibiting regulation of derivatives by Born's agency was passed by the Congress. Born resigned on June 1, 1999. The derivatives market continued to grow yearly throughout both terms of George W. Bush's administration. On September 15, 2008, the bankruptcy of Lehman Brothers forced a broad recognition of a financial crisis in both the US and world capital markets. As Lehman Brothers' failure temporarily reduced financial capital's confidence, a number of newspaper articles and television programs suggested that the failure's possible causes included the conflict between the CFTC and the other regulators.Faiola, Anthony, Nakashima, Ellen and Drew, Jill. The Crash: Risk and Regulation - What Went Wrong, The Washington Post, October 15, 2008. Born declined to publicly comment on the unfolding 2008 crisis until March 2009, when she said: "The market grew so enormously, with so little oversight and regulation, that it made the financial crisis much deeper and more pervasive than it otherwise would have been." She also lamented the influence of Wall Street lobbyists on the process and the refusal of regulators to discuss even modest reforms. An October 2009 Frontline documentary titled "The Warning" described Born's thwarted efforts to regulate and bring transparency to the derivatives market, and the continuing opposition thereto. 
The program concluded with an excerpted interview with Born sounding another warning: "I think we will have continuing danger from these markets and that we will have repeats of the financial crisis -- may differ in details but there will be significant financial downturns and disasters attributed to this regulatory gap, over and over, until we learn from experience." In 2009 Born, along with Sheila Bair of the FDIC, was awarded the John F. Kennedy Profiles in Courage Award in recognition of the "political courage she demonstrated in sounding early warnings about conditions that contributed" to the 2007-08 financial crisis. According to Caroline Kennedy, "Brooksley Born recognized that the financial security of all Americans was being put at risk by the greed, negligence and opposition of powerful and well connected interests.... The catastrophic financial events of recent months have proved them [Born and Sheila Bair] right." One member of the President's working group had a change of heart about Brooksley Born. SEC Chairman Arthur Levitt stated "I've come to know her as one of the most capable, dedicated, intelligent and committed public servants that I have ever come to know", adding that "I could have done much better. I could have made a difference" in response to her warnings. In 2010, a documentary film Inside Job further alleged that derivatives regulation was ineffective from the Clinton administration on. Along with fellow whistleblower, former IMF Chief Economist Raghuram Rajan, who was also scorned by the economic establishment, Brooksley Born was cited as one of the authorities arguing that financial derivatives increase economic risk. Personal life Born is married to Alexander E. Bennett (also retired from Arnold & Porter). She has five adult children - two from a previous marriage to Jacob Landau and three stepchildren. Notably, Born was named a partner at Arnold & Porter while working part-time so she could raise her two young children. When both of her children were school-age, Born returned to practice full-time. References External links Attorney profile at Arnold & Porter Brooksley Born (2009 Winner) of the Profiles in Courage Award, with acceptance speech transcript and NECN video Profile at MarketsWiki Speeches and statements "Testimony Of Brooksley Born Chairperson of the CFTC Concerning The Over-The-Counter Derivatives Market", before the House Committee On Banking And Financial Services, July 24, 1998. "The Lessons of Long Term Capital Management L.P.", Remarks of Brooksley Born, Chairperson of the CFTC, Chicago-Kent-IIT Commodities Law Institute, Chicago, Illinois, October 15, 1998. Interview: Brooksley Born for "PBS Frontline: The Warning", PBS, (streaming VIDEO 1 hour), October 20, 2009. Articles Manuel Roig-Franzia. "Credit Crisis Cassandra:Brooksley Born's Unheeded Warning Is a Rueful Echo 10 Years On", The Washington Post, May 26, 2009 Taibbi, Matt. "The Great American Bubble Machine", Rolling Stone'', July 9–23, 2009 1940 births American women lawyers Arnold & Porter people Clinton administration personnel Columbus School of Law faculty Commodity Futures Trading Commission personnel Heads of United States federal agencies Lawyers from San Francisco Living people Stanford Law School alumni 21st-century American women Stanford University alumni. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Who was Brooksley Elizabeth's first husband? Only give me the answer and do not output any other words.
Answer:
[ "Jacob C. Landau." ]
276
2,761
single_doc_qa
a8a7586612c94fa28a3d1eee72f076b8
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Conventional automatic speech recognition (ASR) systems typically consist of several independently learned components: an acoustic model to predict context-dependent sub-phoneme states (senones) from audio, a graph structure to map senones to phonemes, and a pronunciation model to map phonemes to words. Hybrid systems combine hidden Markov models to model state dependencies with neural networks to predict states BIBREF0, BIBREF1, BIBREF2, BIBREF3. Newer approaches such as end-to-end (E2E) systems reduce the overall complexity of the final system. Our research builds on prior work that has explored using time-delay neural networks (TDNN), other forms of convolutional neural networks, and Connectionist Temporal Classification (CTC) loss BIBREF4, BIBREF5, BIBREF6. We took inspiration from wav2letter BIBREF6, which uses 1D-convolution layers. Liptchinsky et al. BIBREF7 improved wav2letter by increasing the model depth to 19 convolutional layers and adding Gated Linear Units (GLU) BIBREF8, weight normalization BIBREF9 and dropout. By building a deeper and larger capacity network, we aim to demonstrate that we can match or outperform non end-to-end models on the LibriSpeech and 2000hr Fisher+Switchboard tasks. Like wav2letter, our architecture, Jasper, uses a stack of 1D-convolution layers, but with ReLU and batch normalization BIBREF10. We find that ReLU and batch normalization outperform other activation and normalization schemes that we tested for convolutional ASR. As a result, Jasper's architecture contains only 1D convolution, batch normalization, ReLU, and dropout layers – operators highly optimized for training and inference on GPUs. It is possible to increase the capacity of the Jasper model by stacking these operations. Our largest version uses 54 convolutional layers (333M parameters), while our small model uses 34 (201M parameters). We use residual connections to enable this level of depth. We investigate a number of residual options and propose a new residual connection topology we call Dense Residual (DR). Integrating our best acoustic model with a Transformer-XL BIBREF11 language model allows us to obtain new state-of-the-art (SOTA) results on LibriSpeech BIBREF12 test-clean of 2.95% WER and SOTA results among end-to-end models on LibriSpeech test-other. We show competitive results on Wall Street Journal (WSJ), and 2000hr Fisher+Switchboard (F+S). Using only greedy decoding without a language model we achieve 3.86% WER on LibriSpeech test-clean. This paper makes the following contributions: Jasper Architecture Jasper is a family of end-to-end ASR models that replace acoustic and pronunciation models with a convolutional neural network. Jasper uses mel-filterbank features calculated from 20ms windows with a 10ms overlap, and outputs a probability distribution over characters per frame. Jasper has a block architecture: a Jasper BxR model has B blocks, each with R sub-blocks. Each sub-block applies the following operations: a 1D-convolution, batch norm, ReLU, and dropout. All sub-blocks in a block have the same number of output channels. Each block input is connected directly into the last sub-block via a residual connection.
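To make the block structure concrete, here is a minimal PyTorch sketch of one Jasper sub-block as just described; channel counts, kernel size and dropout rate are illustrative placeholders rather than the paper's per-block hyperparameters.

import torch.nn as nn

class JasperSubBlock(nn.Module):
    # One sub-block: 1D convolution -> batch norm -> ReLU -> dropout.
    # The optional residual input is added after batch norm and before the
    # activation, as happens in the last sub-block of each Jasper block.
    def __init__(self, in_ch, out_ch, kernel_size, dropout=0.2):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm1d(out_ch)
        self.act = nn.ReLU(inplace=True)
        self.drop = nn.Dropout(dropout)

    def forward(self, x, residual=None):
        out = self.bn(self.conv(x))
        if residual is not None:
            out = out + residual
        return self.drop(self.act(out))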
The residual connection is first projected through a 1x1 convolution to account for different numbers of input and output channels, then through a batch norm layer. The output of this batch norm layer is added to the output of the batch norm layer in the last sub-block. The result of this sum is passed through the activation function and dropout to produce the output of the sub-block. The sub-block architecture of Jasper was designed to facilitate fast GPU inference. Each sub-block can be fused into a single GPU kernel: dropout is not used at inference-time and is eliminated, batch norm can be fused with the preceding convolution, ReLU clamps the result, and residual summation can be treated as a modified bias term in this fused operation. All Jasper models have four additional convolutional blocks: one pre-processing and three post-processing. See Figure FIGREF7 and Table TABREF8 for details. We also build a variant of Jasper, Jasper Dense Residual (DR). Jasper DR follows DenseNet BIBREF15 and DenseRNet BIBREF16 , but instead of having dense connections within a block, the output of a convolution block is added to the inputs of all the following blocks. While DenseNet and DenseRNet concatenates the outputs of different layers, Jasper DR adds them in the same way that residuals are added in ResNet. As explained below, we find addition to be as effective as concatenation. Normalization and Activation In our study, we evaluate performance of models with: 3 types of normalization: batch norm BIBREF10 , weight norm BIBREF9 , and layer norm BIBREF17 3 types of rectified linear units: ReLU, clipped ReLU (cReLU), and leaky ReLU (lReLU) 2 types of gated units: gated linear units (GLU) BIBREF8 , and gated activation units (GAU) BIBREF18 All experiment results are shown in Table TABREF15 . We first experimented with a smaller Jasper5x3 model to pick the top 3 settings before training on larger Jasper models. We found that layer norm with GAU performed the best on the smaller model. Layer norm with ReLU and batch norm with ReLU came second and third in our tests. Using these 3, we conducted further experiments on a larger Jasper10x4. For larger models, we noticed that batch norm with ReLU outperformed other choices. Thus, leading us to decide on batch normalization and ReLU for our architecture. During batching, all sequences are padded to match the longest sequence. These padded values caused issues when using layer norm. We applied a sequence mask to exclude padding values from the mean and variance calculation. Further, we computed mean and variance over both the time dimension and channels similar to the sequence-wise normalization proposed by Laurent et al. BIBREF19 . In addition to masking layer norm, we additionally applied masking prior to the convolution operation, and masking the mean and variance calculation in batch norm. These results are shown in Table TABREF16 . Interestingly, we found that while masking before convolution gives a lower WER, using masks for both convolutions and batch norm results in worse performance. As a final note, we found that training with weight norm was very unstable leading to exploding activations. Residual Connections For models deeper than Jasper 5x3, we observe consistently that residual connections are necessary for training to converge. In addition to the simple residual and dense residual model described above, we investigated DenseNet BIBREF15 and DenseRNet BIBREF16 variants of Jasper. 
Both connect the outputs of each sub-block to the inputs of following sub-blocks within a block. DenseRNet, similar to Dense Residual, connects the output of each block to the input of all following blocks. DenseNet and DenseRNet combine residual connections using concatenation whereas Residual and Dense Residual use addition. We found that Dense Residual and DenseRNet perform similarly, with each performing better on specific subsets of LibriSpeech. We decided to use Dense Residual for subsequent experiments. The main reason is that, due to concatenation, the growth factor for DenseNet and DenseRNet requires tuning for deeper models, whereas Dense Residual simply repeats sub-blocks. Language Model A language model (LM) is a probability distribution over arbitrary symbol sequences $(y_1, \dots, y_n)$ such that more likely sequences are assigned high probabilities. LMs are frequently used to condition beam search. During decoding, candidates are evaluated using both acoustic scores and LM scores. Traditional N-gram LMs have been augmented with neural LMs in recent work BIBREF20, BIBREF21, BIBREF22. We experiment with statistical N-gram language models BIBREF23 and neural Transformer-XL BIBREF11 models. Our best results use acoustic and word-level N-gram language models to generate a candidate list using beam search with a width of 2048. Next, an external Transformer-XL LM rescores the final list. All LMs were trained on datasets independently from acoustic models. We show results with the neural LM in our Results section. We observed a strong correlation between the quality of the neural LM (measured by perplexity) and WER, as shown in the accompanying figure. NovoGrad For training, we use either Stochastic Gradient Descent (SGD) with momentum or our own NovoGrad, an optimizer similar to Adam BIBREF14, except that its second moments are computed per layer instead of per weight. Compared to Adam, it reduces memory consumption and we find it to be more numerically stable. At each step $t$, NovoGrad computes the stochastic gradient $g_t^l$ following the regular forward-backward pass. Then the second-order moment $v_t^l$ is computed for each layer $l$ similar to ND-Adam BIBREF27: $v_t^l = \beta_2 \cdot v_{t-1}^l + (1 - \beta_2) \cdot \Vert g_t^l \Vert^2$. The second-order moment $v_t^l$ is used to re-scale gradients $g_t^l$ before calculating the first-order moment $m_t^l$: $m_t^l = \beta_1 \cdot m_{t-1}^l + \frac{g_t^l}{\sqrt{v_t^l} + \epsilon}$. If L2-regularization is used, a weight decay $d \cdot w_t$ is added to the re-scaled gradient (as in AdamW BIBREF28): $m_t^l = \beta_1 \cdot m_{t-1}^l + \frac{g_t^l}{\sqrt{v_t^l} + \epsilon} + d \cdot w_t$. Finally, new weights are computed using the learning rate $\lambda_t$: $w_{t+1} = w_t - \lambda_t \cdot m_t^l$. Using NovoGrad instead of SGD with momentum, we decreased the WER on dev-clean LibriSpeech from 4.00% to 3.64%, a relative improvement of 9% for Jasper DR 10x5. We will further analyze NovoGrad in forthcoming work. Results We evaluate Jasper across a number of datasets in various domains. In all experiments, we use dropout and weight decay as regularization. At training time, we use speed perturbation with fixed +/-10% BIBREF29 for LibriSpeech. For WSJ and Hub5'00, we use a random speed perturbation factor between [-10%, 10%] as each utterance is fed into the model. All models have been trained on NVIDIA DGX-1 in mixed precision BIBREF30 using OpenSeq2Seq BIBREF31. Source code, training configurations, and pretrained models are available. Read Speech We evaluated the performance of Jasper on two read speech datasets: LibriSpeech and Wall Street Journal (WSJ).
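Since the LibriSpeech runs described next use NovoGrad, a compact illustrative sketch of the per-layer update defined above is given here; it is a sketch only, and the hyperparameter defaults, moment initialization and the assumption of detached parameter/gradient tensors are mine, not the authors' implementation.

import torch

def novograd_step(params, grads, state, lr, beta1=0.95, beta2=0.98,
                  eps=1e-8, weight_decay=0.0):
    # One illustrative NovoGrad update following the per-layer equations above.
    # `params` and `grads` are per-layer (detached) tensors; `state` maps a layer
    # index to its running first and second moments (m, v).
    for i, (w, g) in enumerate(zip(params, grads)):
        m, v = state.get(i, (torch.zeros_like(w), torch.zeros(())))
        v = beta2 * v + (1 - beta2) * g.norm() ** 2       # layer-wise second moment
        rescaled = g / (v.sqrt() + eps) + weight_decay * w
        m = beta1 * m + rescaled                          # first moment
        with torch.no_grad():
            w.add_(m, alpha=-lr)                          # weight update
        state[i] = (m, v)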
For LibriSpeech, we trained Jasper DR 10x5 using our NovoGrad optimizer for 400 epochs. We achieve SOTA performance on the test-clean subset and SOTA among end-to-end speech recognition models on test-other. We trained a smaller Jasper 10x3 model with SGD with momentum optimizer for 400 epochs on a combined WSJ dataset (80 hours): LDC93S6A (WSJ0) and LDC94S13A (WSJ1). The results are provided in Table TABREF29 . Conversational Speech We also evaluate the Jasper model's performance on a conversational English corpus. The Hub5 Year 2000 (Hub5'00) evaluation (LDC2002S09, LDC2005S13) is widely used in academia. It is divided into two subsets: Switchboard (SWB) and Callhome (CHM). The training data for both the acoustic and language models consisted of the 2000hr Fisher+Switchboard training data (LDC2004S13, LDC2005S13, LDC97S62). Jasper DR 10x5 was trained using SGD with momentum for 50 epochs. We compare to other models trained using the same data and report Hub5'00 results in Table TABREF31 . We obtain good results for SWB. However, there is work to be done to improve WER on harder tasks such as CHM. Conclusions We have presented a new family of neural architectures for end-to-end speech recognition. Inspired by wav2letter's convolutional approach, we build a deep and scalable model, which requires a well-designed residual topology, effective regularization, and a strong optimizer. As our architecture studies demonstrated, a combination of standard components leads to SOTA results on LibriSpeech and competitive results on other benchmarks. Our Jasper architecture is highly efficient for training and inference, and serves as a good baseline approach on top of which to explore more sophisticated regularization, data augmentation, loss functions, language models, and optimization strategies. We are interested to see if our approach can continue to scale to deeper models and larger datasets. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: what were the baselines? Only give me the answer and do not output any other words.
Answer:
[ "Unanswerable", "LF-MMI Attention\nSeq2Seq \nRNN-T \nChar E2E LF-MMI \nPhone E2E LF-MMI \nCTC + Gram-CTC" ]
276
2,654
single_doc_qa
5d11434ca8394005baaf25bdc6d2fe49
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Speech-to-Text translation (ST) is essential for a wide range of scenarios: for example in emergency calls, where agents have to respond emergent requests in a foreign language BIBREF0; or in online courses, where audiences and speakers use different languages BIBREF1. To tackle this problem, existing approaches can be categorized into cascaded method BIBREF2, BIBREF3, where a machine translation (MT) model translates outputs of an automatic speech recognition (ASR) system into target language, and end-to-end method BIBREF4, BIBREF5, where a single model learns acoustic frames to target word sequence mappings in one step towards the final objective of interest. Although the cascaded model remains the dominant approach due to its better performance, the end-to-end method becomes more and more popular because it has lower latency by avoiding inferences with two models and rectifies the error propagation in theory. Since it is hard to obtain a large-scale ST dataset, multi-task learning BIBREF5, BIBREF6 and pre-training techniques BIBREF7 have been applied to end-to-end ST model to leverage large-scale datasets of ASR and MT. A common practice is to pre-train two encoder-decoder models for ASR and MT respectively, and then initialize the ST model with the encoder of the ASR model and the decoder of the MT model. Subsequently, the ST model is optimized with the multi-task learning by weighing the losses of ASR, MT, and ST. This approach, however, causes a huge gap between pre-training and fine-tuning, which are summarized into three folds: Subnet Waste: The ST system just reuses the ASR encoder and the MT decoder, while discards other pre-trained subnets, such as the MT encoder. Consequently, valuable semantic information captured by the MT encoder cannot be inherited by the final ST system. Role Mismatch: The speech encoder plays different roles in pre-training and fine-tuning. The encoder is a pure acoustic model in pre-training, while it has to extract semantic and linguistic features additionally in fine-tuning, which significantly increases the learning difficulty. Non-pre-trained Attention Module: Previous work BIBREF6 trains attention modules for ASR, MT and ST respectively, hence, the attention module of ST does not benefit from the pre-training. To address these issues, we propose a Tandem Connectionist Encoding Network (TCEN), which is able to reuse all subnets in pre-training, keep the roles of subnets consistent, and pre-train the attention module. Concretely, the TCEN consists of three components, a speech encoder, a text encoder, and a target text decoder. Different from the previous work that pre-trains an encoder-decoder based ASR model, we only pre-train an ASR encoder by optimizing the Connectionist Temporal Classification (CTC) BIBREF8 objective function. In this way, the additional decoder of ASR is not required while keeping the ability to read acoustic features into the source language space by the speech encoder. Besides, the text encoder and decoder can be pre-trained on a large MT dataset. After that, we employ common used multi-task learning method to jointly learn ASR, MT and ST tasks. 
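As a rough illustration of what pre-training only an ASR encoder with the CTC objective can look like in practice, the following minimal PyTorch sketch shows one training step; the helper names, batch layout and the choice of index 0 as the blank symbol are assumptions for illustration, not details taken from the paper.

import torch.nn as nn

def ctc_pretrain_step(speech_encoder, ctc_proj, batch, optimizer):
    # One illustrative ASR pre-training step for the speech encoder with CTC.
    # `ctc_proj` maps encoder states to source-vocabulary logits; `out_lens` are
    # the frame lengths after the encoder's temporal downsampling.
    feats, out_lens, tokens, token_lens = batch
    h_s = speech_encoder(feats)                       # (B, T', d)
    log_probs = ctc_proj(h_s).log_softmax(-1)         # (B, T', |V_src|)
    loss = nn.functional.ctc_loss(
        log_probs.transpose(0, 1),                    # CTC expects (T', B, V)
        tokens, out_lens, token_lens,
        blank=0, zero_infinity=True)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()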
Compared to prior works, the encoder of TCEN is a concatenation of an ASR encoder and an MT encoder and our model does not have an ASR decoder, so the subnet waste issue is solved. Furthermore, the two encoders work at tandem, disentangling acoustic feature extraction and linguistic feature extraction, ensuring the role consistency between pre-training and fine-tuning. Moreover, we reuse the pre-trained MT attention module in ST, so we can leverage the alignment information learned in pre-training. Since the text encoder consumes word embeddings of plausible texts in MT task but uses speech encoder outputs in ST task, another question is how one guarantees the speech encoder outputs are consistent with the word embeddings. We further modify our model to achieve semantic consistency and length consistency. Specifically, (1) the projection matrix at the CTC classification layer for ASR is shared with the word embedding matrix, ensuring that they are mapped to the same latent space, and (2) the length of the speech encoder output is proportional to the length of the input frame, so it is much longer than a natural sentence. To bridge the length gap, source sentences in MT are lengthened by adding word repetitions and blank tokens to mimic the CTC output sequences. We conduct comprehensive experiments on the IWSLT18 speech translation benchmark BIBREF1, demonstrating the effectiveness of each component. Our model is significantly better than previous methods by 3.6 and 2.2 BLEU scores for the subword-level decoding and character-level decoding strategies, respectively. Our contributions are three-folds: 1) we shed light on why previous ST models cannot sufficiently utilize the knowledge learned from the pre-training process; 2) we propose a new ST model, which alleviates shortcomings in existing methods; and 3) we empirically evaluate the proposed model on a large-scale public dataset. Background ::: Problem Formulation End-to-end speech translation aims to translate a piece of audio into a target-language translation in one step. The raw speech signals are usually converted to sequences of acoustic features, e.g. Mel filterbank features. Here, we define the speech feature sequence as $\mathbf {x} = (x_1, \cdots , x_{T_x})$.The transcription and translation sequences are denoted as $\mathbf {y^{s}} = (y_1^{s}, \cdots , y_{T_s}^{s})$, and $\mathbf {y^{t}} = (y_1^{t}, \cdots , y_{T_t}^{t})$ repectively. Each symbol in $\mathbf {y^{s}}$ or $\mathbf {y^{t}}$ is an integer index of the symbol in a vocabulary $V_{src}$ or $V_{trg}$ respectively (e.g. $y^s_i=k, k\in [0, |V_{src}|-1]$). In this work, we suppose that an ASR dataset, an MT dataset, and a ST dataset are available, denoted as $\mathcal {A} = \lbrace (\mathbf {x_i}, \mathbf {y^{s}_i})\rbrace _{i=0}^I$, $\mathcal {M} =\lbrace (\mathbf {y^{s}_j}, \mathbf {y^{t}_j})\rbrace _{j=0}^J$ and $ \mathcal {S} =\lbrace (\mathbf {x_l}, \mathbf {y^{t}_l})\rbrace _{l=0}^L$ respectively. Given a new piece of audio $\mathbf {x}$, our goal is to learn an end to end model to generate a translation sentence $\mathbf {y^{t}}$ without generating an intermediate result $\mathbf {y^{s}}$. Background ::: Multi-Task Learning and Pre-training for ST To leverage large scale ASR and MT data, multi-task learning and pre-training techniques are widely employed to improve the ST system. 
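The weight sharing between the CTC classification layer and the source word-embedding matrix mentioned above can be expressed in one line. A minimal PyTorch sketch, with illustrative sizes and under the assumption that the CTC blank is treated as an extra vocabulary entry:

```python
import torch.nn as nn

d_model = 1024
vocab_with_blank = 5000 + 1   # illustrative: source subword vocabulary plus the CTC blank

src_embedding = nn.Embedding(vocab_with_blank, d_model)             # W_{E^s}, used by the text encoder in MT
ctc_projection = nn.Linear(d_model, vocab_with_blank, bias=False)   # W_ctc, used by the CTC classifier

# Tie the two matrices so that speech-encoder outputs are scored against the same
# vectors that embed source words, pulling h^s and e^s into one latent space.
ctc_projection.weight = src_embedding.weight
```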
As shown in Figure FIGREF4, there are three popular multi-task strategies for ST, including 1) one-to-many setting, in which a speech encoder is shared between ASR and ST tasks; 2) many-to-one setting in which a decoder is shared between MT and ST tasks; and 3) many-to-many setting where both the encoder and decoder are shared. A many-to-many multi-task model contains two encoders as well as two decoders. It can be jointly trained on ASR, MT, and ST tasks. As the attention module is task-specific, three attentions are defined. Usually, the size of $\mathcal {A}$ and $\mathcal {M}$ is much larger than $\mathcal {S}$. Therefore, the common training practice is to pre-train the model on ASR and MT tasks and then fine-tune it with a multi-task learning manner. However, as aforementioned, this method suffers from subnet waste, role mismatch and non-pre-trained attention issues, which severely limits the end-to-end ST performance. Our method In this section, we first introduce the architecture of TCEN, which consists of two encoders connected in tandem, and one decoder with an attention module. Then we give the pre-training and fine-tuning strategy for TCEN. Finally, we propose our solutions for semantic and length inconsistency problems, which are caused by multi-task learning. Our method ::: TCEN Architecture Figure FIGREF5 sketches the overall architecture of TCEN, including a speech encoder $enc_s$, a text encoder $enc_t$ and a decoder $dec$ with an attention module $att$. During training, the $enc_s$ acts like an acoustic model which reads the input $\mathbf {x}$ to word or subword representations $\mathbf {h^s}$, then $enc_t$ learns high-level linguistic knowledge into hidden representations $\mathbf {h^t}$. Finally, the $dec$ defines a distribution probability over target words. The advantage of our architecture is that two encoders disentangle acoustic feature extraction and linguistic feature extraction, making sure that valuable knowledge learned from ASR and MT tasks can be effectively leveraged for ST training. Besides, every module in pre-training can be utilized in fine-tuning, alleviating the subnet waste problem. Follow BIBREF9 inaguma2018speech, we use CNN-BiLSTM architecture to build our model. Specifically, the input features $\mathbf {x}$ are organized as a sequence of feature vectors in length $T_x$. Then, $\mathbf {x}$ is passed into a stack of two convolutional layers followed by max-pooling: where $\mathbf {v}^{(l-1)}$ is feature maps in last layer and $\mathbf {W}^{(l)}$ is the filter. The max-pooling layers downsample the sequence in length by a total factor of four. The down-sampled feature sequence is further fed into a stack of five bidirectional $d$-dimensional LSTM layers: where $[;]$ denotes the vector concatenation. The final output representation from the speech encoder is denoted as $\mathbf {h^s}=(h^s_1, \cdots , h^s_{\frac{T_x}{4}})$, where $h_i^s \in \mathbb {R}^d$. The text encoder $enc_t$ consists of two bidirectional LSTM layers. In ST task, $enc_t$ accepts speech encoder output $\mathbf {h}^s$ as input. While in MT, $enc_t$ consumes the word embedding representation $\mathbf {e^s}$ derived from $\mathbf {y^s}$, where each element $e^s_i$ is computed by choosing the $y_i^s$-th vector from the source embedding matrix $W_{E^s}$. The goal of $enc_t$ is to extract high-level linguistic features like syntactic features or semantic features from lower level subword representations $\mathbf {h}^s$ or $\mathbf {e}^s$. 
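A compact PyTorch sketch of the speech encoder just described — two convolution blocks with max-pooling for a fourfold reduction in time, followed by five bidirectional LSTM layers; the kernel sizes and channel counts are assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """CNN-BiLSTM acoustic encoder: two conv + max-pool blocks (4x downsampling in
    time) followed by five bidirectional LSTM layers."""

    def __init__(self, feat_dim=83, channels=64, d=1024):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),          # halves time and frequency
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),          # total time downsampling: 4x
        )
        self.blstm = nn.LSTM(channels * (feat_dim // 4), d, num_layers=5,
                             bidirectional=True, batch_first=True)

    def forward(self, x):                          # x: (batch, T_x, feat_dim)
        h = self.conv(x.unsqueeze(1))              # (batch, C, T_x/4, feat_dim/4)
        h = h.transpose(1, 2).flatten(2)           # (batch, T_x/4, C * feat_dim/4)
        h, _ = self.blstm(h)                       # h^s: (batch, T_x/4, 2d); directions concatenated
        return h
```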
Since $\mathbf {h}^s$ and $\mathbf {e}^s$ belong to different latent space and have different lengths, there remain semantic and length inconsistency problems. We will provide our solutions in Section SECREF21. The output sequence of $enc_t$ is denoted as $\mathbf {h}^t$. The decoder is defined as two unidirectional LSTM layers with an additive attention $att$. It predicts target sequence $\mathbf {y^{t}}$ by estimating conditional probability $P(\mathbf {y^{t}}|\mathbf {x})$: Here, $z_k$ is the the hidden state of the deocder RNN at $k$ step and $c_k$ is a time-dependent context vector computed by the attention $att$. Our method ::: Training Procedure Following previous work, we split the training procedure to pre-training and fine-tuning stages. In pre-training stage, the speech encoder $enc_s$ is trained towards CTC objective using dataset $\mathcal {A}$, while the text encoder $enc_t$ and the decoder $dec$ are trained on MT dataset $\mathcal {M}$. In fine-tuning stage, we jointly train the model on ASR, MT, and ST tasks. Our method ::: Training Procedure ::: Pre-training To sufficiently utilize the large dataset $\mathcal {A}$ and $\mathcal {M}$, the model is pre-trained on CTC-based ASR task and MT task in the pre-training stage. For ASR task, in order to get rid of the requirement for decoder and enable the $enc_s$ to generate subword representation, we leverage connectionist temporal classification (CTC) BIBREF8 loss to train the speech encoder. Given an input $\mathbf {x}$, $enc_s$ emits a sequence of hidden vectors $\mathbf {h^s}$, then a softmax classification layer predicts a CTC path $\mathbf {\pi }$, where $\pi _t \in V_{src} \cup $ {`-'} is the observing label at particular RNN step $t$, and `-' is the blank token representing no observed labels: where $W_{ctc} \in \mathbb {R}^{d \times (|V_{src}|+1)}$ is the weight matrix in the classification layer and $T$ is the total length of encoder RNN. A legal CTC path $\mathbf {\pi }$ is a variation of the source transcription $\mathbf {y}^s$ by allowing occurrences of blank tokens and repetitions, as shown in Table TABREF14. For each transcription $\mathbf {y}$, there exist many legal CTC paths in length $T$. The CTC objective trains the model to maximize the probability of observing the golden sequence $\mathbf {y}^s$, which is calculated by summing the probabilities of all possible legal paths: where $\Phi _T(y)$ is the set of all legal CTC paths for sequence $\mathbf {y}$ with length $T$. The loss can be easily computed using forward-backward algorithm. More details about CTC are provided in supplementary material. For MT task, we use the cross-entropy loss as the training objective. During training, $\mathbf {y^s}$ is converted to embedding vectors $\mathbf {e^s}$ through embedding layer $W_{E^s}$, then $enc_t$ consumes $\mathbf {e^s}$ and pass the output $\mathbf {h^t}$ to decoder. The objective function is defined as: Our method ::: Training Procedure ::: Fine-tune In fine-tune stage, we jointly update the model on ASR, MT, and ST tasks. The training for ASR and MT follows the same process as it was in pre-training stage. For ST task, the $enc_s$ reads the input $\mathbf {x}$ and generates $\mathbf {h^s}$, then $enc_t$ learns high-level linguistic knowledge into $\mathbf {h^t}$. Finally, the $dec$ predicts the target sentence. The ST loss function is defined as: Following the update strategy proposed by BIBREF11 luong2015multi, we allocate a different training ratio $\alpha _i$ for each task. 
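The two pre-training objectives can be sketched as follows, assuming a PyTorch implementation; `speech_encoder`, `text_encoder`, `decoder`, and `ctc_projection` stand in for the modules above, and the decoder interface is hypothetical:

```python
import torch.nn as nn
import torch.nn.functional as F

ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)   # index 0 reserved for the blank here

def asr_pretrain_loss(speech_encoder, ctc_projection, x, y_src, out_lens, y_lens):
    # out_lens are the encoder output lengths, i.e. input lengths after 4x downsampling
    h_s = speech_encoder(x)                                   # (batch, T/4, d)
    log_probs = F.log_softmax(ctc_projection(h_s), dim=-1)    # (batch, T/4, |V_src|+1)
    return ctc_loss(log_probs.transpose(0, 1), y_src, out_lens, y_lens)

def mt_pretrain_loss(src_embedding, text_encoder, decoder, y_src, y_tgt):
    e_s = src_embedding(y_src)            # word embeddings of the source sentence
    h_t = text_encoder(e_s)               # high-level linguistic features
    logits = decoder(h_t, y_tgt[:, :-1])  # teacher forcing on the target prefix
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           y_tgt[:, 1:].reshape(-1))
```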
When switching between tasks, we select randomly a new task $i$ with probability $\frac{\alpha _i}{\sum _{j}\alpha _{j}}$. Our method ::: Subnet-Consistency Our model keeps role consistency between pre-training and fine-tuning by connecting two encoders for ST task. However, this leads to some new problems: 1) The text encoder consumes $\mathbf {e^s}$ during MT training, while it accepts $\mathbf {h^s}$ during ST training. However, $\mathbf {e^s}$ and $\mathbf {h^s}$ may not follow the same distribution, resulting in the semantic inconsistency. 2) Besides, the length of $\mathbf {h^s}$ is not the same order of magnitude with the length of $\mathbf {e^s}$, resulting in the length inconsistency. In response to the above two challenges, we propose two countermeasures: 1) We share weights between CTC classification layer and source-end word embedding layer during training of ASR and MT, encouraging $\mathbf {e^s}$ and $\mathbf {h^s}$ in the same space. 2)We feed the text encoder source sentences in the format of CTC path, which are generated from a seq2seq model, making it more robust toward long inputs. Our method ::: Subnet-Consistency ::: Semantic Consistency As shown in Figure FIGREF5, during multi-task training, two different hidden features will be fed into the text encoder $enc_t$: the embedding representation $\mathbf {e}^s$ in MT task, and the $enc_s$ output $\mathbf {h^s}$ in ST task. Without any regularization, they may belong to different latent spaces. Due to the space gap, the $enc_t$ has to compromise between two tasks, limiting its performance on individual tasks. To bridge the space gap, our idea is to pull $\mathbf {h^s}$ into the latent space where $\mathbf {e}^s$ belong. Specifically, we share the weight $W_{ctc}$ in CTC classification layer with the source embedding weights $W_{E^s}$, which means $W_{ctc} = W_{E^s}$. In this way, when predicting the CTC path $\mathbf {\pi }$, the probability of observing the particular label $w_i \in V_{src}\cup ${`-'} at time step $t$, $p(\pi _t=w_i|\mathbf {x})$, is computed by normalizing the product of hidden vector $h_t^s$ and the $i$-th vector in $W_{E^s}$: The loss function closes the distance between $h^s_t$ and golden embedding vector, encouraging $\mathbf {h}^s$ have the same distribution with $\mathbf {e}^s$. Our method ::: Subnet-Consistency ::: Length Consistency Another existing problem is length inconsistency. The length of the sequence $\mathbf {h^s}$ is proportional to the length of the input frame $\mathbf {x}$, which is much longer than the length of $\mathbf {e^s}$. To solve this problem, we train an RNN-based seq2seq model to transform normal source sentences to noisy sentences in CTC path format, and replace standard MT with denoising MT for multi-tasking. Specifically, we first train a CTC ASR model based on dataset $\mathcal {A} = \lbrace (\mathbf {x}_i, \mathbf {y}^s_i)\rbrace _{i=0}^{I}$, and generate a CTC-path $\mathbf {\pi }_i$ for each audio $\mathbf {x}_i$ by greedy decoding. Then we define an operation $S(\cdot )$, which converts a CTC path $\mathbf {\pi }$ to a sequence of the unique tokens $\mathbf {u}$ and a sequence of repetition times for each token $\mathbf {l}$, denoted as $S(\mathbf {\pi }) = (\mathbf {u}, \mathbf {l})$. Notably, the operation is reversible, meaning that $S^{-1} (\mathbf {u}, \mathbf {l})=\mathbf {\pi }$. We use the example $\mathbf {\pi _1}$ in Table TABREF14 and show the corresponding $\mathbf {u}$ and $\mathbf {l}$ in Table TABREF24. 
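The conversion operation $S(\cdot )$ and its inverse amount to run-length encoding over the CTC path; a minimal sketch:

```python
from itertools import groupby

def S(ctc_path):
    """Collapse a CTC path into its unique-token sequence u and repetition counts l."""
    groups = [(tok, len(list(run))) for tok, run in groupby(ctc_path)]
    return [tok for tok, _ in groups], [count for _, count in groups]

def S_inverse(u, l):
    """Rebuild the CTC path from (u, l); S is reversible by construction."""
    return [tok for tok, count in zip(u, l) for _ in range(count)]

path = ["-", "he", "he", "-", "-", "sat", "-"]   # illustrative CTC path, '-' is the blank
u, l = S(path)                                   # u = ['-', 'he', '-', 'sat', '-'], l = [1, 2, 2, 1, 1]
assert S_inverse(u, l) == path
```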
Then we build a dataset $\mathcal {P} = \lbrace (\mathbf {y^s}_i, \mathbf {u}_i, \mathbf {l}_i)\rbrace _{i=0}^{I}$ by decoding all the audio pieces in $\mathcal {A}$ and transform the resulting path by the operation $S(\cdot )$. After that, we train a seq2seq model, as shown in Figure FIGREF25, which takes $ \mathbf {y^s}_i$ as input and decodes $\mathbf {u}_i, \mathbf {l}_i$ as outputs. With the seq2seq model, a noisy MT dataset $\mathcal {M}^{\prime }=\lbrace (\mathbf {\pi }_l, \mathbf {y^t}_l)\rbrace _{l=0}^{L}$ is obtained by converting every source sentence $\mathbf {y^s}_i \in \mathcal {M}$ to $\mathbf {\pi _i}$, where $\mathbf {\pi }_i = S^{-1}(\mathbf {u}_i, \mathbf {l}_i)$. We did not use the standard seq2seq model which takes $\mathbf {y^s}$ as input and generates $\mathbf {\pi }$ directly, since there are too many blank tokens `-' in $\mathbf {\pi }$ and the model tends to generate a long sequence with only blank tokens. During MT training, we randomly sample text pairs from $\mathcal {M}^{\prime }$ and $\mathcal {M}$ according to a hyper-parameter $k$. After tuning on the validation set, about $30\%$ pairs are sampled from $\mathcal {M}^{\prime }$. In this way, the $enc_t$ is more robust toward the longer inputs given by the $enc_s$. Experiments We conduct experiments on the IWSLT18 speech translation task BIBREF1. Since IWSLT participators use different data pre-processing methods, we reproduce several competitive baselines based on the ESPnet BIBREF12 for a fair comparison. Experiments ::: Dataset ::: Speech translation data: The organizer provides a speech translation corpus extracting from the TED talk (ST-TED), which consists of raw English wave files, English transcriptions, and aligned German translations. The corpus contains 272 hours of English speech with 171k segments. We split 2k segments from the corpus as dev set and tst2010, tst2013, tst2014, tst2015 are used as test sets. Speech recognition data: Aside from ST-TED, TED-LIUM2 corpus BIBREF13 is provided as speech recognition data, which contains 207 hours of English speech and 93k transcript sentences. Text translation data: We use transcription and translation pairs in the ST-TED corpus and WIT3 as in-domain MT data, which contains 130k and 200k sentence pairs respectively. WMT2018 is used as out-of-domain training data which consists of 41M sentence pairs. Data preprocessing: For speech data, the utterances are segmented into multiple frames with a 25 ms window size and a 10 ms step size. Then we extract 80-channel log-Mel filter bank and 3-dimensional pitch features using Kaldi BIBREF14, resulting in 83-dimensional input features. We normalize them by the mean and the standard deviation on the whole training set. The utterances with more than 3000 frames are discarded. The transcripts in ST-TED are in true-case with punctuation while in TED-LIUM2, transcripts are in lower-case and unpunctuated. Thus, we lowercase all the sentences and remove the punctuation to keep consistent. To increase the amount of training data, we perform speed perturbation on the raw signals with speed factors 0.9 and 1.1. For the text translation data, sentences longer than 80 words or shorter than 10 words are removed. Besides, we discard pairs whose length ratio between source and target sentence is smaller than 0.5 or larger than 2.0. Word tokenization is performed using the Moses scripts and both English and German words are in lower-case. We use two different sets of vocabulary for our experiments. 
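The text-side filtering described above (length limits and the source/target length ratio) can be sketched as follows, assuming the length limits apply to both sides of a pair:

```python
def keep_pair(src_sent, tgt_sent, min_len=10, max_len=80,
              min_ratio=0.5, max_ratio=2.0):
    """Drop pairs that are too short/long or whose length ratio falls outside [0.5, 2.0]."""
    src_len, tgt_len = len(src_sent.split()), len(tgt_sent.split())
    if not (min_len <= src_len <= max_len and min_len <= tgt_len <= max_len):
        return False
    return min_ratio <= src_len / tgt_len <= max_ratio

pairs = [("this is a short sentence with exactly ten words okay",
          "das ist ein kurzer satz mit genau zehn woertern okay")]
filtered = [(s, t) for s, t in pairs if keep_pair(s, t)]
```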
For the subword experiments, both English and German vocabularies are generated using sentencepiece BIBREF15 with a fixed size of 5k tokens. BIBREF9 inaguma2018speech show that increasing the vocabulary size is not helpful for ST task. For the character experiments, both English and German sentences are represented in the character level. For evaluation, we segment each audio with the LIUM SpkDiarization tool BIBREF16 and then perform MWER segmentation with RWTH toolkit BIBREF17. We use lowercase BLEU as evaluation metric. Experiments ::: Baseline Models and Implementation We compare our method with following baselines. Vanilla ST baseline: The vanilla ST BIBREF9 has only a speech encoder and a decoder. It is trained from scratch on the ST-TED corpus. Pre-training baselines: We conduct three pre-training baseline experiments: 1) encoder pre-training, in which the ST encoder is initialized from an ASR model; 2) decoder pre-training, in which the ST decoder is initialized from an MT model; and 3) encoder-decoder pre-training, where both the encoder and decoder are pre-trained. The ASR model has the same architecture with vanilla ST model, trained on the mixture of ST-TED and TED-LIUM2 corpus. The MT model has a text encoder and decoder with the same architecture of which in TCEN. It is first trained on WMT data (out-of-domain) and then fine-tuned on in-domain data. Multi-task baselines: We also conduct three multi-task baseline experiments including one-to-many setting, many-to-one setting, and many-to-many setting. In the first two settings, we train the model with $\alpha _{st}=0.75$ while $\alpha _{asr}=0.25$ or $\alpha _{mt}=0.25$. For many-to-many setting, we use $\alpha _{st}=0.6, \alpha _{asr}=0.2$ and $\alpha _{mt}=0.2$.. For MT task, we use only in-domain data. Many-to-many+pre-training: We train a many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models. Triangle+pre-train: BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 proposed a triangle multi-task strategy for speech translation. Their model solves the subnet waste issue by concatenating an ST decoder to an ASR encoder-decoder model. Notably, their ST decoder can consume representations from the speech encoder as well as the ASR decoder. For a fair comparison, the speech encoder and the ASR decoder are initialized from the pre-trained ASR model. The Triangle model is fine-tuned under their multi-task manner. All our baselines as well as TCEN are implemented based on ESPnet BIBREF12, the RNN size is set as $d=1024$ for all models. We use a dropout of 0.3 for embeddings and encoders, and train using Adadelta with initial learning rate of 1.0 for a maximum of 10 epochs. For training of TCEN, we set $\alpha _{asr}=0.2$ and $\alpha _{mt}=0.8$ in the pre-training stage, since the MT dataset is much larger than ASR dataset. For fine-tune, we use $\alpha _{st}=0.6, \alpha _{asr}=0.2$ and $\alpha _{mt}=0.2$, same as the `many-to-many' baseline. For testing, we select the model with the best accuracy on speech translation task on dev set. At inference time, we use a beam size of 10, and the beam scores include length normalization with a weight of 0.2. Experiments ::: Experimental Results Table TABREF29 shows the results on four test sets as well as the average performance. 
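The 5k-token subword vocabularies can be built with sentencepiece roughly as below; the file names are hypothetical and the exact API varies slightly across sentencepiece versions:

```python
import sentencepiece as spm

# Train one 5k-token subword model per language (run once per side).
spm.SentencePieceTrainer.train(
    input="train.en.txt",        # hypothetical path to the English training text
    model_prefix="spm_en",
    vocab_size=5000,
)

sp = spm.SentencePieceProcessor(model_file="spm_en.model")
pieces = sp.encode("the quick brown fox", out_type=str)   # list of subword pieces
```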
Our method significantly outperforms the strong `many-to-many+pretrain' baseline by 3.6 and 2.2 BLEU scores respectively, indicating the proposed method is very effective that substantially improves the translation quality. Besides, both pre-training and multi-task learning can improve translation quality, and the pre-training settings (2nd-4th rows) are more effective compared to multi-task settings (5th-8th rows). We observe a performance degradation in the `triangle+pretrain' baseline. Compared to our method, where the decoder receives higher-level syntactic and semantic linguistic knowledge extracted from text encoder, their ASR decoder can only provide lower word-level linguistic information. Besides, since their model lacks text encoder and the architecture of ST decoder is different from MT decoder, their model cannot utilize the large-scale MT data in all the training stages. Interestingly, we find that the char-level models outperform the subword-level models in all settings, especially in vanilla baseline. A similar phenomenon is observed by BIBREF6 berard2018end. A possible explanation is that learning the alignments between speech frames and subword units in another language is notoriously difficult. Our method can bring more gains in the subword setting since our model is good at learning the text-to-text alignment and the subword-level alignment is more helpful to the translation quality. Experiments ::: Discussion ::: Ablation Study To better understand the contribution of each component, we perform an ablation study on subword-level experiments. The results are shown in Table TABREF37. In `-MT noise' setting, we do not add noise to source sentences for MT. In `-weight sharing' setting, we use different parameters in CTC classification layer and source embedding layer. These two experiments prove that both weight sharing and using noisy MT input benefit to the final translation quality. Performance degrades more in `-weight sharing', indicating the semantic consistency contributes more to our model. In the `-pretrain' experiment, we remove the pre-training stage and directly update the model on three tasks, leading to a dramatic decrease on BLEU score, indicating the pre-training is an indispensable step for end-to-end ST. Experiments ::: Discussion ::: Learning Curve It is interesting to investigate why our method is superior to baselines. We find that TCEN achieves a higher final result owing to a better start-point in fine-tuning. Figure FIGREF39 provides learning curves of subword accuracy on validation set. The x-axis denotes the fine-tuning training steps. The vanilla model starts at a low accuracy, because its networks are not pre-trained on the ASR and MT data. The trends of our model and `many-to-many+pretrain' are similar, but our model outperforms it about five points in the whole fine-tuning process. It indicates that the gain comes from bridging the gap between pre-training and fine-tuning rather than a better fine-tuning process. Experiments ::: Discussion ::: Compared with a Cascaded System Table TABREF29 compares our model with end-to-end baselines. Here, we compare our model with cascaded systems. We build a cascaded system by combining the ASR model and MT model used in pre-training baseline. Word error rate (WER) of the ASR system and BLEU score of the MT system are reported in the supplementary material. 
In addition to a simple combination of the ASR and MT systems, we also re-segment the ASR outputs before feeding to the MT system, denoted as cascaded+re-seg. Specifically, we train a seq2seq model BIBREF19 on the MT dataset, where the source side is a no punctuation sentence and the target side is a natural sentence. After that, we use the seq2seq model to add sentence boundaries and punctuation on ASR outputs. Experimental results are shown in Table TABREF41. Our end-to-end model outperforms the simple cascaded model over 2 BLEU scores, and it achieves a comparable performance with the cascaded model combining with a sentence re-segment model. Related Work Early works conduct speech translation in a pipeline manner BIBREF2, BIBREF20, where the ASR output lattices are fed into an MT system to generate target sentences. HMM BIBREF21, DenseNet BIBREF22, TDNN BIBREF23 are commonly used ASR systems, while RNN with attention BIBREF19 and Transformer BIBREF10 are top choices for MT. To enhance the robustness of the NMT model towards ASR errors, BIBREF24 DBLP:conf/eacl/TsvetkovMD14 and BIBREF25 DBLP:conf/asru/ChenHHL17 propose to simulate the noise in training and inference. To avoid error propagation and high latency issues, recent works propose translating the acoustic speech into text in target language without yielding the source transcription BIBREF4. Since ST data is scarce, pre-training BIBREF7, multi-task learning BIBREF4, BIBREF6, curriculum learning BIBREF26, attention-passing BIBREF27, and knowledge distillation BIBREF28, BIBREF29 strategies have been explored to utilize ASR data and MT data. Specifically, BIBREF5 DBLP:conf/interspeech/WeissCJWC17 show improvements of performance by training the ST model jointly with the ASR and the MT model. BIBREF6 berard2018end observe faster convergence and better results due to pre-training and multi-task learning on a larger dataset. BIBREF7 DBLP:conf/naacl/BansalKLLG19 show that pre-training a speech encoder on one language can improve ST quality on a different source language. All of them follow the traditional multi-task training strategies. BIBREF26 DBLP:journals/corr/abs-1802-06003 propose to use curriculum learning to improve ST performance on syntactically distant language pairs. To effectively leverage transcriptions in ST data, BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 augment the multi-task model where the target decoder receives information from the source decoder and they show improvements on low-resource speech translation. Their model just consumes ASR and ST data, in contrast, our work sufficiently utilizes the large-scale MT data to capture the rich semantic knowledge. BIBREF30 DBLP:conf/icassp/JiaJMWCCALW19 use pre-trained MT and text-to-speech (TTS) synthesis models to convert weakly supervised data into ST pairs and demonstrate that an end-to-end MT model can be trained using only synthesised data. Conclusion This paper has investigated the end-to-end method for ST. It has discussed why there is a huge gap between pre-training and fine-tuning in previous methods. To alleviate these issues, we have proposed a method, which is capable of reusing every sub-net and keeping the role of sub-net consistent between pre-training and fine-tuning. Empirical studies have demonstrated that our model significantly outperforms baselines. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What are the baselines? Only give me the answer and do not output any other words.
Answer:
[ "Vanilla ST baseline, encoder pre-training, in which the ST encoder is initialized from an ASR model, decoder pre-training, in which the ST decoder is initialized from an MT model, encoder-decoder pre-training, where both the encoder and decoder are pre-trained, many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models, Triangle+pre-train: BIBREF18 DBLP:conf/naacl/AnastasopoulosC18 proposed a triangle multi-task strategy for speech translation", "Vanilla ST baseline, Pre-training baselines, Multi-task baselines, Many-to-many+pre-training, Triangle+pre-train", "Vanilla ST baseline: The vanilla ST BIBREF9 has only a speech encoder and a decoder. It is trained from scratch on the ST-TED corpus.\n\nPre-training baselines: We conduct three pre-training baseline experiments: 1) encoder pre-training, in which the ST encoder is initialized from an ASR model; 2) decoder pre-training, in which the ST decoder is initialized from an MT model; and 3) encoder-decoder pre-training, where both the encoder and decoder are pre-trained. The ASR model has the same architecture with vanilla ST model, trained on the mixture of ST-TED and TED-LIUM2 corpus. The MT model has a text encoder and decoder with the same architecture of which in TCEN. It is first trained on WMT data (out-of-domain) and then fine-tuned on in-domain data.\n\nMulti-task baselines: We also conduct three multi-task baseline experiments including one-to-many setting, many-to-one setting, and many-to-many setting. In the first two settings, we train the model with $\\alpha _{st}=0.75$ while $\\alpha _{asr}=0.25$ or $\\alpha _{mt}=0.25$. For many-to-many setting, we use $\\alpha _{st}=0.6, \\alpha _{asr}=0.2$ and $\\alpha _{mt}=0.2$.. For MT task, we use only in-domain data.\n\nMany-to-many+pre-training: We train a many-to-many multi-task model where the encoders and decoders are derived from pre-trained ASR and MT models. " ]
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: McPherson County (standard abbreviation: MP) is a county located in the U.S. state of Kansas. As of the 2020 census, the county population was 30,223. The largest city and county seat is McPherson. The county is named for Civil War General James B. McPherson. History Early history For many millennia, the Great Plains of North America was inhabited by nomadic Native Americans. From the 16th century to 18th century, the Kingdom of France claimed ownership of large parts of North America. In 1762, after the French and Indian War, France secretly ceded New France to Spain, per the Treaty of Fontainebleau. In 1802, Spain returned most of the land to France, but keeping title to about 7,500 square miles. In 1803, most of the land for modern day Kansas was acquired by the United States from France as part of the 828,000 square mile Louisiana Purchase for 2.83 cents per acre. In 1848, after the Mexican–American War, the Treaty of Guadalupe Hidalgo with Spain brought into the United States all or part of land for ten future states, including southwest Kansas. In 1854, the Kansas Territory was organized, then in 1861 Kansas became the 34th U.S. state. 19th century From the 1820s to 1870s, the Santa Fe Trail passed through, what is now McPherson County. The trail entered the county, east of Canton, then south of Galva, then north of Inman, and west towards Lyons. In 1855, Charles O. Fuller established a ranch adjacent to the Running Turkey Creek Crossing about two miles south and one mile east of Galva. Fuller's Ranch provided accommodations for travelers on the Santa Fe Trail and was probably the first white settlement in McPherson County. Peketon County was established in 1860, by the passage of a bill by S. N. Wood: An act to establish Peketon County. Section 1. - That all that territory west of the sixth principal meridian and south of Township 16, in Kansas Territory, be and the same is hereby erected into a county, to be known by the name of Peketon County. On February 17, 1865, Peketon County was abolished, and McPherson County was made a part of Marion County, which extended from the west line of Chase County to the present western boundary of Kansas. In 1868, Solomon Stephens and L. N. Holmberg were appointed Justices of the Peace—the first officers in what is now McPherson County. The next year (1869) occurred the first election for the township, now the county of McPherson. McPherson was regularly organized as a county in the spring of 1870, a mass meeting being held at Sweadal. Sweadal, the county seat thus selected, was located about one mile and a half southwest of the present site of Lindsborg. In September, however, the County Commissioners resolved to meet at the latter place, McPherson which had already been located some two years. In April, 1873, a petition was filed for the county seat re-location. It was signed by 483 voters, and a special election was accordingly ordered for June 10. Upon that day, McPherson received 605 votes, New Gottland 325, King City 3 and Lindsborg 1; McPherson's majority over all, 276. In May the McPherson Town Company had offered, as an inducement for the location of the county seat at this point, the free use of rooms for ten years, and the donation of two squares of land on the town site. 
The offer was accepted the next month, the County Commissioners selecting blocks 56 and 65. Thus the county seat was established at McPherson and has remained since. As early as 1875, city leaders of Marion held a meeting to consider a branch railroad from Florence. In 1878, Atchison, Topeka and Santa Fe Railway and parties from Marion County and McPherson County chartered the Marion and McPherson Railway Company. In 1879, a branch line was built from Florence to McPherson, in 1880 it was extended to Lyons, in 1881 it was extended to Ellinwood. The line was leased and operated by the Atchison, Topeka and Santa Fe Railway. The line from Florence to Marion, was abandoned in 1968. In 1992, the line from Marion to McPherson was sold to Central Kansas Railway. In 1993, after heavy flood damage, the line from Marion to McPherson was abandoned. The original branch line connected Florence, Marion, Canada, Hillsboro, Lehigh, Canton, Galva, McPherson, Conway, Windom, Little River, Mitchell, Lyons, Chase, then connected with the original AT&SF main line at Ellinwood. In 1887, the Chicago, Kansas and Nebraska Railway extended its main line from Herington to Pratt. This main line connected Herington, Ramona, Tampa, Durham, Waldeck, Canton, Galva, McPherson, Groveland, Inman, Medora, Hutchinson, Whiteside, Partridge, Arlington, Langdon, Turon, Preston, Natrona, Pratt. In 1888, this main line was extended to Liberal. Later, this line was extended to Tucumcari, New Mexico and Santa Rosa, New Mexico, where it made a connection with the Southern Pacific from El Paso, Texas. The Chicago, Kansas and Nebraska Railway was absorbed by the Chicago, Rock Island and Pacific Railway. This line is also called the "Golden State Route". 20th century The National Old Trails Road, also known as the Ocean-to-Ocean Highway, was established in 1912, and was routed through Windom, Conway, McPherson. Geography According to the U.S. Census Bureau, the county has a total area of , of which is land and (0.3%) is water. Adjacent counties Saline County (north) Dickinson County (northeast) Marion County (east) Harvey County (southeast) Reno County (southwest) Rice County (west) Ellsworth County (northwest) Major highways Interstate 135 U.S. Route 56 U.S. Route 81 K-4 K-61 K-153 Demographics The McPherson Micropolitan Statistical Area includes all of McPherson County. 2000 census As of the census of 2000, there were 29,554 people, 11,205 households, and 7,966 families residing in the county. The population density was 33 people per square mile (13/km2). There were 11,830 housing units at an average density of 13 per square mile (5/km2). The racial makeup of the county was 96.53% White, 0.81% Black or African American, 0.34% Native American, 0.32% Asian, 0.06% Pacific Islander, 0.79% from other races, and 1.16% from two or more races. 1.94% of the population were Hispanic or Latino of any race. 37.1% were of German, 12.9% Swedish, 12.1% American, 6.7% English and 6.3% Irish ancestry according to Census 2000. There were 11,205 households, out of which 33.00% had children under the age of 18 living with them, 62.50% were married couples living together, 6.00% had a female householder with no husband present, and 28.90% were non-families. 25.50% of all households were made up of individuals, and 11.80% had someone living alone who was 65 years of age or older. The average household size was 2.49 and the average family size was 2.99. 
In the county, the population was spread out, with 25.40% under the age of 18, 10.30% from 18 to 24, 25.20% from 25 to 44, 21.80% from 45 to 64, and 17.30% who were 65 years of age or older. The median age was 38 years. For every 100 females there were 95.90 males. For every 100 females age 18 and over, there were 92.90 males. The median income for a household in the county was $41,138, and the median income for a family was $48,243. Males had a median income of $33,530 versus $21,175 for females. The per capita income for the county was $18,921. About 4.20% of families and 6.60% of the population were below the poverty line, including 5.20% of those under age 18 and 8.10% of those age 65 or over. Government Presidential elections McPherson county is often carried by Republican candidates. The last time a Democratic candidate has carried this county was in 1964 by Lyndon B. Johnson. Laws Following amendment to the Kansas Constitution in 1986, the county remained a prohibition, or "dry", county until 1996, when voters approved the sale of alcoholic liquor by the individual drink with a 30 percent food sales requirement. Education Colleges McPherson College in McPherson Bethany College in Lindsborg Central Christian College in McPherson Unified school districts Smoky Valley USD 400 McPherson USD 418 Canton-Galva USD 419 Moundridge USD 423 Inman USD 448 School district office in neighboring county Goessel USD 411 Little River-Windom USD 444 Museums Birger Sandzén Memorial Gallery in Lindsborg McCormick-Deering Days Museum in Inman McPherson Museum in McPherson Lindsborg Old Mill & Swedish Heritage Museum in Lindsborg Kansas Motorcycle Museum in Marquette Communities Cities Canton Galva Inman Lindsborg Marquette McPherson (county seat) Moundridge Windom Unincorporated communities † means a Census-Designated Place (CDP) by the United States Census Bureau. Conway Elyria† Groveland Johnstown New Gottland Roxbury† Ghost towns Alta Mills Battle Hill Christian Doles Park Elivon King City Sweadal Townships McPherson County is divided into twenty-five townships. The cities of Lindsborg and McPherson are considered governmentally independent and are excluded from the census figures for the townships. In the following table, the population center is the largest city (or cities) included in that township's population total, if it is of a significant size. See also List of people from McPherson County, Kansas National Register of Historic Places listings in McPherson County, Kansas McPherson Valley Wetlands Maxwell Wildlife Refuge References Notes Further reading Wheeler, Wayne Leland. "An Analysis of Social Change in a Swedish-Immigrant Community: The Case of Lindsborg, Kansas." (PhD dissertation, University of Missouri-Columbia; ProQuest Dissertations Publishing, 1959. 5905657). County Through the Years: A Pictorial History of McPherson County; McPherson Sentinel' Heritage House Publishing Co; 1992. McPherson County First Courthouse Built About 1869 or 1870; Lindsborg News-Record; March 30, 1959. Pioneer Life and Lore of McPherson County, Kansas; Edna Nyquist; Democratic-Opinion Press; 1932. A History of the Church of the Brethren in Kansas (includes McPherson College history); Elmer LeRoy Craik; McPherson Daily; Republican Press; 397 pages; 1922. Portrait and Biographical Record of Dickinson, Saline, McPherson, and Marion Counties, Kansas; Chapman Bros; 614 pages; 1893. Standard Atlas of McPherson County, Kansas; Geo. A. Ogle & Co; 82 pages; 1921. 
Plat Book of McPherson County, Kansas; North West Publishing Co; 50 pages; 1903. Edwards' Atlas of McPherson County, Kansas; John P. Edwards; 51 pages; 1884. Trails The Story of the Marking of the Santa Fe Trail by the Daughters of the American Revolution in Kansas and the State of Kansas; Almira Cordry; Crane Co; 164 pages; 1915. (Download 4MB PDF eBook) The National Old Trails Road To Southern California, Part 1 (LA to KC); Automobile Club Of Southern California; 64 pages; 1916. (Download 6.8MB PDF eBook) Mennonite Settlements Impact of Mennonite settlement on the cultural landscape of Kansas; Brenda Martin; Kansas State University; 1985/1988. Mennonite settlement : the relationship between the physical and cultural environment; Susan Movle; University of Utah; 1975/1886. Status of Mennonite women in Kansas in their church and home relationships; Eva Harshbarger; Bluffton College; 1925/1945. External links County McPherson County - Directory of Public Officials Historical , from Hatteberg's People'' on KAKE TV news Maps McPherson County Maps: Current, Historic, KDOT Kansas Highway Maps: Current, Historic, KDOT Kansas Railroad Maps: Current, 1996, 1915, KDOT and Kansas Historical Society Kansas counties 1867 establishments in Kansas Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What was the name of the first white settlement in McPherson County? Only give me the answer and do not output any other words.
Answer:
[ "The first white settlement in McPherson County was Fuller's Ranch, established by Charles O. Fuller." ]
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Data annotation is a major bottleneck for the application of supervised learning approaches to many problems. As a result, unsupervised methods that learn directly from unlabeled data are increasingly important. For tasks related to unsupervised syntactic analysis, discrete generative models have dominated in recent years – for example, for both part-of-speech (POS) induction BIBREF0 , BIBREF1 and unsupervised dependency parsing BIBREF2 , BIBREF3 , BIBREF4 . While similar models have had success on a range of unsupervised tasks, they have mostly ignored the apparent utility of continuous word representations evident from supervised NLP applications BIBREF5 , BIBREF6 . In this work, we focus on leveraging and explicitly representing continuous word embeddings within unsupervised models of syntactic structure. Pre-trained word embeddings from massive unlabeled corpora offer a compact way of injecting a prior notion of word similarity into models that would otherwise treat words as discrete, isolated categories. However, the specific properties of language captured by any particular embedding scheme can be difficult to control, and, further, may not be ideally suited to the task at hand. For example, pre-trained skip-gram embeddings BIBREF7 with small context window size are found to capture the syntactic properties of language well BIBREF8 , BIBREF9 . However, if our goal is to separate syntactic categories, this embedding space is not ideal – POS categories correspond to overlapping interspersed regions in the embedding space, evident in Figure SECREF4 . In our approach, we propose to learn a new latent embedding space as a projection of pre-trained embeddings (depicted in Figure SECREF5 ), while jointly learning latent syntactic structure – for example, POS categories or syntactic dependencies. To this end, we introduce a new generative model (shown in Figure FIGREF6 ) that first generates a latent syntactic representation (e.g. a dependency parse) from a discrete structured prior (which we also call the “syntax model”), then, conditioned on this representation, generates a sequence of latent embedding random variables corresponding to each word, and finally produces the observed (pre-trained) word embeddings by projecting these latent vectors through a parameterized non-linear function. The latent embeddings can be jointly learned with the structured syntax model in a completely unsupervised fashion. By choosing an invertible neural network as our non-linear projector, and then parameterizing our model in terms of the projection's inverse, we are able to derive tractable exact inference and marginal likelihood computation procedures so long as inference is tractable in the underlying syntax model. In sec:learn-with-inv we show that this derivation corresponds to an alternate view of our approach whereby we jointly learn a mapping of observed word embeddings to a new embedding space that is more suitable for the syntax model, but include an additional Jacobian regularization term to prevent information loss. Recent work has sought to take advantage of word embeddings in unsupervised generative models with alternate approaches BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 . 
BIBREF9 build an HMM with Gaussian emissions on observed word embeddings, but they do not attempt to learn new embeddings. BIBREF10 , BIBREF11 , and BIBREF12 extend HMM or dependency model with valence (DMV) BIBREF2 with multinomials that use word (or tag) embeddings in their parameterization. However, they do not represent the embeddings as latent variables. In experiments, we instantiate our approach using both a Markov-structured syntax model and a tree-structured syntax model – specifically, the DMV. We evaluate on two tasks: part-of-speech (POS) induction and unsupervised dependency parsing without gold POS tags. Experimental results on the Penn Treebank BIBREF13 demonstrate that our approach improves the basic HMM and DMV by a large margin, leading to the state-of-the-art results on POS induction, and state-of-the-art results on unsupervised dependency parsing in the difficult training scenario where neither gold POS annotation nor punctuation-based constraints are available. Model As an illustrative example, we first present a baseline model for Markov syntactic structure (POS induction) that treats a sequence of pre-trained word embeddings as observations. Then, we propose our novel approach, again using Markov structure, that introduces latent word embedding variables and a neural projector. Lastly, we extend our approach to more general syntactic structures. Example: Gaussian HMM We start by describing the Gaussian hidden Markov model introduced by BIBREF9 , which is a locally normalized model with multinomial transitions and Gaussian emissions. Given a sentence of length INLINEFORM0 , we denote the latent POS tags as INLINEFORM1 , observed (pre-trained) word embeddings as INLINEFORM2 , transition parameters as INLINEFORM3 , and Gaussian emission parameters as INLINEFORM4 . The joint distribution of data and latent variables factors as: DISPLAYFORM0 where INLINEFORM0 is the multinomial transition probability and INLINEFORM1 is the multivariate Gaussian emission probability. While the observed word embeddings do inform this model with a notion of word similarity – lacking in the basic multinomial HMM – the Gaussian emissions may not be sufficiently flexible to separate some syntactic categories in the complex pre-trained embedding space – for example the skip-gram embedding space as visualized in Figure SECREF4 where different POS categories overlap. Next we introduce a new approach that adds flexibility to the emission distribution by incorporating new latent embedding variables. Markov Structure with Neural Projector To flexibly model observed embeddings and yield a new representation space that is more suitable for the syntax model, we propose to cascade a neural network as a projection function, deterministically transforming the simple space defined by the Gaussian HMM to the observed embedding space. We denote the latent embedding of the INLINEFORM0 word in a sentence as INLINEFORM1 , and the neural projection function as INLINEFORM2 , parameterized by INLINEFORM3 . In the case of sequential Markov structure, our new model corresponds to the following generative process: For each time step INLINEFORM0 , [noitemsep, leftmargin=*] Draw the latent state INLINEFORM0 Draw the latent embedding INLINEFORM0 Deterministically produce embedding INLINEFORM0 The graphical model is depicted in Figure FIGREF6 . The deterministic projection can also be viewed as sampling each observation from a point mass at INLINEFORM0 . 
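The generative story above can be illustrated by ancestral sampling; in this NumPy sketch the tiny two-tag parameterization and the identity projector are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sentence(T, pi0, trans, means, cov, project):
    """Draw a tag sequence from the Markov prior, a latent embedding per tag from a
    Gaussian, then push each latent embedding through the deterministic projector."""
    tags, observed = [], []
    z = rng.choice(len(pi0), p=pi0)
    for t in range(T):
        if t > 0:
            z = rng.choice(trans.shape[1], p=trans[z])
        e = rng.multivariate_normal(means[z], cov)   # latent embedding e_t
        observed.append(project(e))                  # observed embedding x_t = f(e_t)
        tags.append(z)
    return tags, observed

pi0 = np.array([0.5, 0.5])
trans = np.array([[0.8, 0.2], [0.3, 0.7]])
means = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
cov = np.eye(3)
tags, xs = sample_sentence(5, pi0, trans, means, cov, project=lambda e: e)
```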
The joint distribution of our model is: DISPLAYFORM0 where INLINEFORM0 is a conditional Gaussian distribution, and INLINEFORM1 is the Dirac delta function centered at INLINEFORM2 : DISPLAYFORM0 General Structure with Neural Projector Our approach can be applied to a broad family of structured syntax models. We denote latent embedding variables as INLINEFORM0 , discrete latent variables in the syntax model as INLINEFORM1 ( INLINEFORM2 ), where INLINEFORM3 are conditioned to generate INLINEFORM4 . The joint probability of our model factors as: DISPLAYFORM0 where INLINEFORM0 represents the probability of the syntax model, and can encode any syntactic structure – though, its factorization structure will determine whether inference is tractable in our full model. As shown in Figure FIGREF6 , we focus on two syntax models for syntactic analysis in this paper. The first is Markov-structured, which we use for POS induction, and the second is DMV-structured, which we use to learn dependency parses without supervision. The marginal data likelihood of our model is: DISPLAYFORM0 While the discrete variables INLINEFORM0 can be marginalized out with dynamic program in many cases, it is generally intractable to marginalize out the latent continuous variables, INLINEFORM1 , for an arbitrary projection INLINEFORM2 in Eq. ( EQREF17 ), which means inference and learning may be difficult. In sec:opt, we address this issue by constraining INLINEFORM3 to be invertible, and show that this constraint enables tractable exact inference and marginal likelihood computation. Learning & Inference In this section, we introduce an invertibility condition for our neural projector to tackle the optimization challenge. Specifically, we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists. Invertible transformations have been explored before in independent components analysis BIBREF14 , gaussianization BIBREF15 , and deep density models BIBREF16 , BIBREF17 , BIBREF18 , for unstructured data. Here, we generalize this style of approach to structured learning, and augment it with discrete latent variables ( INLINEFORM2 ). Under the invertibility condition, we derive a learning algorithm and give another view of our approach revealed by the objective function. Then, we present the architecture of a neural projector we use in experiments: a volume-preserving invertible neural network proposed by BIBREF16 for independent components estimation. Learning with Invertibility For ease of exposition, we explain the learning algorithm in terms of Markov structure without loss of generality. As shown in Eq. ( EQREF17 ), the optimization challenge in our approach comes from the intractability of the marginalized emission factor INLINEFORM0 . If we can marginalize out INLINEFORM1 and compute INLINEFORM2 , then the posterior and marginal likelihood of our Markov-structured model can be computed with the forward-backward algorithm. We can apply Eq. ( EQREF14 ) and obtain : INLINEFORM3 By using the change of variable rule to the integration, which allows the integration variable INLINEFORM0 to be replaced by INLINEFORM1 , the marginal emission factor can be computed in closed-form when the invertibility condition is satisfied: DISPLAYFORM0 where INLINEFORM0 is a conditional Gaussian distribution, INLINEFORM1 is the Jacobian matrix of function INLINEFORM2 at INLINEFORM3 , and INLINEFORM4 represents the absolute value of its determinant. 
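As a minimal illustration of this closed-form emission, the following sketch evaluates the change-of-variables density with a toy elementwise scaling in place of the learned projector (its inverse and Jacobian are known analytically):

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_emission(x, mu, cov, f_inv, log_abs_det_jac):
    """log p(x | z): evaluate the Gaussian at e = f^{-1}(x) and add log|det J_{f^{-1}}(x)|."""
    return multivariate_normal.logpdf(f_inv(x), mean=mu, cov=cov) + log_abs_det_jac(x)

# Toy invertible projector f(e) = a * e, so f^{-1}(x) = x / a and
# |det J_{f^{-1}}| = prod(1 / |a_i|).
a = np.array([2.0, 0.5, 1.5])
f_inv = lambda x: x / a
log_abs_det_jac = lambda x: -np.sum(np.log(np.abs(a)))

x = np.array([1.0, -0.2, 0.7])
lp = log_emission(x, mu=np.zeros(3), cov=np.eye(3),
                  f_inv=f_inv, log_abs_det_jac=log_abs_det_jac)
```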
This Jacobian term is nonzero and differentiable if and only if INLINEFORM5 exists. Eq. ( EQREF19 ) shows that we can directly calculate the marginal emission distribution INLINEFORM0 . Denote the marginal data likelihood of Gaussian HMM as INLINEFORM1 , then the log marginal data likelihood of our model can be directly written as: DISPLAYFORM0 where INLINEFORM0 represents the new sequence of embeddings after applying INLINEFORM1 to each INLINEFORM2 . Eq. ( EQREF20 ) shows that the training objective of our model is simply the Gaussian HMM log likelihood with an additional Jacobian regularization term. From this view, our approach can be seen as equivalent to reversely projecting the data through INLINEFORM3 to another manifold INLINEFORM4 that is directly modeled by the Gaussian HMM, with a regularization term. Intuitively, we optimize the reverse projection INLINEFORM5 to modify the INLINEFORM6 space, making it more appropriate for the syntax model. The Jacobian regularization term accounts for the volume expansion or contraction behavior of the projection. Maximizing it can be thought of as preventing information loss. In the extreme case, the Jacobian determinant is equal to zero, which means the projection is non-invertible and thus information is being lost through the projection. Such “information preserving” regularization is crucial during optimization, otherwise the trivial solution of always projecting data to the same single point to maximize likelihood is viable. More generally, for an arbitrary syntax model the data likelihood of our approach is: DISPLAYFORM0 If the syntax model itself allows for tractable inference and marginal likelihood computation, the same dynamic program can be used to marginalize out INLINEFORM0 . Therefore, our joint model inherits the tractability of the underlying syntax model. Invertible Volume-Preserving Neural Net For the projection we can use an arbitrary invertible function, and given the representational power of neural networks they seem a natural choice. However, calculating the inverse and Jacobian of an arbitrary neural network can be difficult, as it requires that all component functions be invertible and also requires storage of large Jacobian matrices, which is memory intensive. To address this issue, several recent papers propose specially designed invertible networks that are easily trainable yet still powerful BIBREF16 , BIBREF17 , BIBREF19 . Inspired by these works, we use the invertible transformation proposed by BIBREF16 , which consists of a series of “coupling layers”. This architecture is specially designed to guarantee a unit Jacobian determinant (and thus the invertibility property). From Eq. ( EQREF22 ) we know that only INLINEFORM0 is required for accomplishing learning and inference; we never need to explicitly construct INLINEFORM1 . Thus, we directly define the architecture of INLINEFORM2 . As shown in Figure FIGREF24 , the nonlinear transformation from the observed embedding INLINEFORM3 to INLINEFORM4 represents the first coupling layer. The input in this layer is partitioned into left and right halves of dimensions, INLINEFORM5 and INLINEFORM6 , respectively. A single coupling layer is defined as: DISPLAYFORM0 where INLINEFORM0 is the coupling function and can be any nonlinear form. This transformation satisfies INLINEFORM1 , and BIBREF16 show that its Jacobian matrix is triangular with all ones on the main diagonal. Thus the Jacobian determinant is always equal to one (i.e. 
volume-preserving) and the invertibility condition is naturally satisfied. To be sufficiently expressive, we compose multiple coupling layers as suggested in BIBREF16 . Specifically, we exchange the role of left and right half vectors at each layer as shown in Figure FIGREF24 . For instance, from INLINEFORM0 to INLINEFORM1 the left subset INLINEFORM2 is unchanged, while from INLINEFORM3 to INLINEFORM4 the right subset INLINEFORM5 remains the same. Also note that composing multiple coupling layers does not change the volume-preserving and invertibility properties. Such a sequence of invertible transformations from the data space INLINEFORM6 to INLINEFORM7 is also called normalizing flow BIBREF20 . Experiments In this section, we first describe our datasets and experimental setup. We then instantiate our approach with Markov and DMV-structured syntax models, and report results on POS tagging and dependency grammar induction respectively. Lastly, we analyze the learned latent embeddings. Data For both POS tagging and dependency parsing, we run experiments on the Wall Street Journal (WSJ) portion of the Penn Treebank. To create the observed data embeddings, we train skip-gram word embeddings BIBREF7 that are found to capture syntactic properties well when trained with small context window BIBREF8 , BIBREF9 . Following BIBREF9 , the dimensionality INLINEFORM0 is set to 100, and the training context window size is set to 1 to encode more syntactic information. The skip-gram embeddings are trained on the one billion word language modeling benchmark dataset BIBREF21 in addition to the WSJ corpus. General Experimental Setup For the neural projector, we employ rectified networks as coupling function INLINEFORM0 following BIBREF16 . We use a rectified network with an input layer, one hidden layer, and linear output units, the number of hidden units is set to the same as the number of input units. The number of coupling layers are varied as 4, 8, 16 for both tasks. We optimize marginal data likelihood directly using Adam BIBREF22 . For both tasks in the fully unsupervised setting, we do not tune the hyper-parameters using supervised data. Unsupervised POS tagging For unsupervised POS tagging, we use a Markov-structured syntax model in our approach, which is a popular structure for unsupervised tagging tasks BIBREF9 , BIBREF10 . Following existing literature, we train and test on the entire WSJ corpus (49208 sentences, 1M tokens). We use 45 tag clusters, the number of POS tags that appear in WSJ corpus. We train the discrete HMM and the Gaussian HMM BIBREF9 as baselines. For the Gaussian HMM, mean vectors of Gaussian emissions are initialized with the empirical mean of all word vectors with an additive noise. We assume diagonal covariance matrix for INLINEFORM0 and initialize it with the empirical variance of the word vectors. Following BIBREF9 , the covariance matrix is fixed during training. The multinomial probabilities are initialized as INLINEFORM1 , where INLINEFORM2 . For our approach, we initialize the syntax model and Gaussian parameters with the pre-trained Gaussian HMM. The weights of layers in the rectified network are initialized from a uniform distribution with mean zero and a standard deviation of INLINEFORM3 , where INLINEFORM4 is the input dimension. We evaluate the performance of POS tagging with both Many-to-One (M-1) accuracy BIBREF23 and V-Measure (VM) BIBREF24 . 
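Before turning to the results, a minimal PyTorch sketch of the additive coupling layers described above: the coupling function is a one-hidden-layer rectified network, the Jacobian determinant is one by construction, and stacked layers alternate which half passes through unchanged (dimensions follow the 100-dimensional embeddings used here):

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """One coupling layer: one half of the vector is copied, the other half is
    shifted by a nonlinear function of the first half, giving a triangular
    Jacobian with unit diagonal (volume-preserving, trivially invertible)."""

    def __init__(self, dim=100, swap=False):
        super().__init__()
        half = dim // 2
        self.swap = swap                 # alternate which half is left unchanged
        self.g = nn.Sequential(nn.Linear(half, half), nn.ReLU(), nn.Linear(half, half))

    def forward(self, x):                # maps observed embeddings toward the latent space
        x1, x2 = x.chunk(2, dim=-1)
        if self.swap:
            x1, x2 = x2, x1
        y1, y2 = x1, x2 + self.g(x1)
        if self.swap:
            y1, y2 = y2, y1
        return torch.cat([y1, y2], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        if self.swap:
            y1, y2 = y2, y1
        x1, x2 = y1, y2 - self.g(y1)
        if self.swap:
            x1, x2 = x2, x1
        return torch.cat([x1, x2], dim=-1)

# Compose several layers, exchanging the roles of the two halves at each step.
layers = nn.ModuleList([AdditiveCoupling(swap=bool(i % 2)) for i in range(4)])
```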
We found that tagging performance is well correlated with training data likelihood; we therefore use training data likelihood as an unsupervised criterion to select the trained model over 10 random restarts after training for 50 epochs. We repeat this process 5 times and report the mean and standard deviation of performance. We compare our approach with the basic HMM, the Gaussian HMM, and several state-of-the-art systems, including sophisticated HMM variants and clustering techniques with hand-engineered features. The results are presented in Table TABREF32. Through the introduced latent embeddings and additional neural projection, our approach improves over the Gaussian HMM by 5.4 points in M-1 and 5.6 points in VM. Neural HMM (NHMM) BIBREF10 is a baseline that also learns word representations jointly. Neither their basic model nor the extended Conv version outperforms the Gaussian HMM. Their best model incorporates another LSTM to model long-distance dependencies and breaks the Markov assumption, yet our approach still achieves a substantial improvement over it without considering additional context information. Moreover, our method outperforms the best published result that benefits from hand-engineered features BIBREF27 by 2.0 points on VM. We found that most tagging errors happen in noun subcategories. Therefore, we perform a one-to-one mapping between gold POS tags and induced clusters and plot the normalized confusion matrix of noun subcategories in Figure FIGREF35. The Gaussian HMM fails to identify “NN” and “NNS” correctly in most cases, and it often recognizes “NNPS” as “NNP”. In contrast, our approach corrects these errors well. Unsupervised Dependency Parsing without gold POS tags For the task of unsupervised dependency parse induction, we employ the Dependency Model with Valence (DMV) BIBREF2 as the syntax model in our approach. DMV is a generative model that defines a probability distribution over dependency parse trees and syntactic categories, generating tokens and dependencies in a head-outward fashion. While DMV is traditionally trained using gold POS tags as observed syntactic categories, in our approach we treat each tag as a latent variable, as described in sec:general-neural. Most existing approaches to this task are not fully unsupervised, since they rely on gold POS tags following the original experimental setup for DMV. This is partially because automatically parsing from words is difficult even when using unsupervised syntactic categories BIBREF29. However, inducing dependencies from words alone represents a more realistic experimental condition, since gold POS tags are often unavailable in practice. Previous work that has trained from words alone often requires additional linguistic constraints (like sentence-internal boundaries) BIBREF29, BIBREF30, BIBREF31, BIBREF32, acoustic cues BIBREF33, additional training data BIBREF4, or annotated data from related languages BIBREF34. Our approach is naturally designed to train on word embeddings directly; thus, we attempt to induce dependencies without using gold POS tags or other extra linguistic information. Like previous work, we use sections 02-21 of the WSJ corpus as training data and evaluate on section 23. We remove punctuation and train the models on sentences of length INLINEFORM0; “head-percolation” rules BIBREF39 are applied to obtain gold dependencies for evaluation.
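For reference, the Many-to-One evaluation used for tagging above maps each induced cluster to the gold tag it co-occurs with most frequently and then measures token-level accuracy; a small self-contained sketch (illustrative, not the paper's evaluation script):

from collections import Counter, defaultdict

def many_to_one_accuracy(pred_clusters, gold_tags):
    # pred_clusters, gold_tags: equal-length lists of per-token labels.
    # Each induced cluster is mapped to its most frequent gold tag
    # (several clusters may map to the same tag), then accuracy is computed.
    assert len(pred_clusters) == len(gold_tags)
    counts = defaultdict(Counter)
    for c, g in zip(pred_clusters, gold_tags):
        counts[c][g] += 1
    mapping = {c: ctr.most_common(1)[0][0] for c, ctr in counts.items()}
    correct = sum(mapping[c] == g for c, g in zip(pred_clusters, gold_tags))
    return correct / len(gold_tags)

# toy example
print(many_to_one_accuracy([0, 0, 1, 2, 2], ["NN", "NN", "VB", "NN", "NNS"]))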
We train basic DMV, extended DMV (E-DMV) BIBREF35 and Gaussian DMV (which treats POS tag as unknown latent variables and generates observed word embeddings directly conditioned on them following Gaussian distribution) as baselines. Basic DMV and E-DMV are trained with Viterbi EM BIBREF40 on unsupervised POS tags induced from our Markov-structured model described in sec:pos. Multinomial parameters of the syntax model in both Gaussian DMV and our model are initialized with the pre-trained DMV baseline. Other parameters are initialized in the same way as in the POS tagging experiment. The directed dependency accuracy (DDA) is used for evaluation and we report accuracy on sentences of length INLINEFORM1 and all lengths. We train the parser until training data likelihood converges, and report the mean and standard deviation over 20 random restarts. Our model directly observes word embeddings and does not require gold POS tags during training. Thus, results from related work trained on gold tags are not directly comparable. However, to measure how these systems might perform without gold tags, we run three recent state-of-the-art systems in our experimental setting: UR-A E-DMV BIBREF36 , Neural E-DMV BIBREF11 , and CRF Autoencoder (CRFAE) BIBREF37 . We use unsupervised POS tags (induced from our Markov-structured model) in place of gold tags. We also train basic DMV on gold tags and include several state-of-the-art results on gold tags as reference points. As shown in Table TABREF39 , our approach is able to improve over the Gaussian DMV by 4.8 points on length INLINEFORM0 and 4.8 points on all lengths, which suggests the additional latent embedding layer and neural projector are helpful. The proposed approach yields, to the best of our knowledge, state-of-the-art performance without gold POS annotation and without sentence-internal boundary information. DMV, UR-A E-DMV, Neural E-DMV, and CRFAE suffer a large decrease in performance when trained on unsupervised tags – an effect also seen in previous work BIBREF29 , BIBREF34 . Since our approach induces latent POS tags jointly with dependency trees, it may be able to learn POS clusters that are more amenable to grammar induction than the unsupervised tags. We observe that CRFAE underperforms its gold-tag counterpart substantially. This may largely be a result of the model's reliance on prior linguistic rules that become unavailable when gold POS tag types are unknown. Many extensions to DMV can be considered orthogonal to our approach – they essentially focus on improving the syntax model. It is possible that incorporating these more sophisticated syntax models into our approach may lead to further improvements. Sensitivity Analysis In the above experiments we initialize the structured syntax components with the pre-trained Gaussian or discrete baseline, which is shown as a useful technique to help train our deep models. We further study the results with fully random initialization. In the POS tagging experiment, we report the results in Table TABREF48 . While the performance with 4 layers is comparable to the pre-trained Gaussian initialization, deeper projections (8 or 16 layers) result in a dramatic drop in performance. This suggests that the structured syntax model with very deep projections is difficult to train from scratch, and a simpler projection might be a good compromise in the random initialization setting. Different from the Markov prior in POS tagging experiments, our parsing model seems to be quite sensitive to the initialization. 
For example, the directed accuracy of our approach on sentences of length INLINEFORM0 is below 40.0 with random initialization. This is consistent with previous work that has noted the importance of careful initialization for DMV-based models, such as the commonly used harmonic initializer BIBREF2. However, it is not straightforward to apply the harmonic initializer for DMV directly in our model without some kind of pre-training, since we do not observe gold POS tags. We investigate the effect of the choice of pre-trained embeddings on the performance of our approach. To this end, we additionally include results using fastText embeddings BIBREF41 – which, in contrast with skip-gram embeddings, include character-level information. We set the context window size to 1 and the dimensionality to 100, as in the skip-gram training, while keeping the other parameters at their defaults. These results are summarized in Table TABREF50 and Table TABREF51. While fastText embeddings lead to reduced performance with our model, our approach still yields an improvement over the Gaussian baseline with the new observed embedding space. Qualitative Analysis of Embeddings We perform qualitative analysis to understand how the latent embeddings help induce syntactic structures. First, we filter out low-frequency words and punctuation in WSJ and visualize the remaining words (10k) with t-SNE BIBREF42 under different embeddings. We assign each word its most likely gold POS tag in WSJ and color the words according to their gold POS tags. For our Markov-structured model, we have displayed the embedding space in Figure SECREF5, where the gold POS clusters are well formed. Further, we present five example target words and their five nearest neighbors in terms of cosine similarity. As shown in Table TABREF53, the skip-gram embeddings capture both semantic and syntactic aspects to some degree, yet our embeddings are able to focus especially on the syntactic aspects of words, in an unsupervised fashion and without using any extra morphological information. In Figure FIGREF54 we depict the latent embeddings learned with the DMV-structured syntax model. Unlike the Markov structure, the DMV structure maps a large subset of singular and plural nouns to the same overlapping region. However, two clusters of singular and plural nouns are actually separated. We inspect the two clusters and the overlapping region in Figure FIGREF54; it turns out that the nouns in the separated clusters are words that can appear as subjects and, therefore, for which verb agreement is important to model. In contrast, the nouns in the overlapping region are typically objects. This demonstrates that the latent embeddings are focusing on aspects of language that are specifically important for modeling dependencies, without ever having seen examples of dependency parses. Some previous work has deliberately created embeddings to capture different notions of similarity BIBREF43, BIBREF44, but it uses extra morphology or dependency annotations to guide the embedding learning. Our approach provides a potential alternative for creating new embeddings that are guided by a structured syntax model, using only unlabeled text corpora.
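The nearest-neighbor inspection referred to above ranks words by cosine similarity in a given embedding space; a minimal NumPy sketch with illustrative variable names:

import numpy as np

def nearest_neighbors(query, vocab, vectors, k=5):
    # Return the k words whose embeddings have the highest cosine similarity
    # to the query word. `vectors` is a (V, d) matrix aligned with `vocab`.
    idx = vocab.index(query)
    normed = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = normed @ normed[idx]
    order = np.argsort(-sims)
    return [vocab[i] for i in order if i != idx][:k]

# toy usage with random vectors
vocab = ["walk", "walks", "walked", "apple", "apples"]
vectors = np.random.default_rng(0).normal(size=(len(vocab), 100))
print(nearest_neighbors("walk", vocab, vectors, k=3))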
Another related class of generative models is variational auto-encoders (VAEs) BIBREF45, which optimize a lower bound on the marginal data likelihood and can be extended to learn latent structures BIBREF46, BIBREF47. Compared with flow-based models, VAEs remove the invertibility constraint but sacrifice the merits of exact inference and exact log-likelihood computation, which potentially results in optimization challenges BIBREF48. Our approach can also be viewed in connection with generative adversarial networks (GANs) BIBREF49, a likelihood-free framework for learning implicit generative models. However, it is non-trivial for gradient-based methods such as GANs to propagate gradients through discrete structures. Conclusion In this work, we define a novel generative approach that leverages continuous word representations for unsupervised learning of syntactic structure. Experiments on both POS induction and unsupervised dependency parsing demonstrate the effectiveness of the proposed approach. Future work might explore more sophisticated invertible projections, or recurrent projections that jointly transform the entire input sequence. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What is the invertibility condition? Only give me the answer and do not output any other words.
Answer:
[ "The neural projector must be invertible.", "we constrain our neural projector with two requirements: (1) INLINEFORM0 and (2) INLINEFORM1 exists" ]
276
5,734
single_doc_qa
b968e82647f7490494e317bfe9e3e523
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Pretrained Language Models (PTLMs) such as BERT BIBREF1 have spearheaded advances on many NLP tasks. Usually, PTLMs are pretrained on unlabeled general-domain and/or mixed-domain text, such as Wikipedia, digital books or the Common Crawl corpus. When applying PTLMs to specific domains, it can be useful to domain-adapt them. Domain adaptation of PTLMs has typically been achieved by pretraining on target-domain text. One such model is BioBERT BIBREF2, which was initialized from general-domain BERT and then pretrained on biomedical scientific publications. The domain adaptation is shown to be helpful for target-domain tasks such as biomedical Named Entity Recognition (NER) or Question Answering (QA). On the downside, the computational cost of pretraining can be considerable: BioBERTv1.0 was adapted for ten days on eight large GPUs (see Table TABREF1), which is expensive, environmentally unfriendly, prohibitive for small research labs and students, and may delay prototyping on emerging domains. We therefore propose a fast, CPU-only domain-adaptation method for PTLMs: We train Word2Vec BIBREF3 on target-domain text and align the resulting word vectors with the wordpiece vectors of an existing general-domain PTLM. The PTLM thus gains domain-specific lexical knowledge in the form of additional word vectors, but its deeper layers remain unchanged. Since Word2Vec and the vector space alignment are efficient models, the process requires a fraction of the resources associated with pretraining the PTLM itself, and it can be done on CPU. In Section SECREF4, we use the proposed method to domain-adapt BERT on PubMed+PMC (the data used for BioBERTv1.0) and/or CORD-19 (Covid-19 Open Research Dataset). We improve over general-domain BERT on eight out of eight biomedical NER tasks, using a fraction of the compute cost associated with BioBERT. In Section SECREF5, we show how to quickly adapt an existing Question Answering model to text about the Covid-19 pandemic, without any target-domain Language Model pretraining or finetuning. Related work ::: The BERT PTLM For our purpose, a PTLM consists of three parts: A tokenizer $\mathcal {T}_\mathrm {LM} : \mathbb {L}^+ \rightarrow \mathbb {L}_\mathrm {LM}^+$, a wordpiece embedding function $\mathcal {E}_\mathrm {LM}: \mathbb {L}_\mathrm {LM} \rightarrow \mathbb {R}^{d_\mathrm {LM}}$ and an encoder function $\mathcal {F}_\mathrm {LM}$. $\mathbb {L}_\mathrm {LM}$ is a limited vocabulary of wordpieces. All words that are not in $\mathbb {L}_\mathrm {LM}$ are tokenized into sequences of shorter wordpieces, e.g., tachycardia becomes ta ##chy ##card ##ia. Given a sentence $S = [w_1, \ldots , w_T]$, tokenized as $\mathcal {T}_\mathrm {LM}(S) = [\mathcal {T}_\mathrm {LM}(w_1); \ldots ; \mathcal {T}_\mathrm {LM}(w_T)]$, $\mathcal {E}_\mathrm {LM}$ embeds every wordpiece in $\mathcal {T}_\mathrm {LM}(S)$ into a real-valued, trainable wordpiece vector. The wordpiece vectors of the entire sequence are stacked and fed into $\mathcal {F}_\mathrm {LM}$. Note that we consider position and segment embeddings to be a part of $\mathcal {F}_\mathrm {LM}$ rather than $\mathcal {E}_\mathrm {LM}$. In the case of BERT, $\mathcal {F}_\mathrm {LM}$ is a Transformer BIBREF4, followed by a final Feed-Forward Net. 
During pretraining, the Feed-Forward Net predicts the identity of masked wordpieces. When finetuning on a supervised task, it is usually replaced with a randomly initialized task-specific layer. Related work ::: Domain-adapted PTLMs Domain adaptation of PTLMs is typically achieved by pretraining on unlabeled target-domain text. Some examples of such models are BioBERT BIBREF2, which was pretrained on the PubMed and/or PubMed Central (PMC) corpora, SciBERT BIBREF5, which was pretrained on papers from SemanticScholar, ClinicalBERT BIBREF6, BIBREF7 and ClinicalXLNet BIBREF8, which were pretrained on clinical patient notes, and AdaptaBERT BIBREF9, which was pretrained on Early Modern English text. In most cases, a domain-adapted PTLM is initialized from a general-domain PTLM (e.g., standard BERT), though BIBREF5 report better results with a model that was pretrained from scratch with a custom wordpiece vocabulary. In this paper, we focus on BioBERT, as its domain adaptation corpora are publicly available. Related work ::: Word vectors Word vectors are distributed representations of words that are trained on unlabeled text. Contrary to PTLMs, word vectors are non-contextual, i.e., a word type is always assigned the same vector, regardless of context. In this paper, we use Word2Vec BIBREF3 to train word vectors. We will denote the Word2Vec lookup function as $\mathcal {E}_\mathrm {W2V} : \mathbb {L}_\mathrm {W2V} \rightarrow \mathbb {R}^{d_\mathrm {W2V}}$. Related work ::: Word vector space alignment Word vector space alignment has most frequently been explored in the context of cross-lingual word embeddings. For instance, BIBREF10 align English and Spanish Word2Vec spaces by a simple linear transformation. BIBREF11 use a related method to align cross-lingual word vectors and multilingual BERT wordpiece vectors. Method In the following, we assume access to a general-domain PTLM, as described in Section SECREF2, and a corpus of unlabeled target-domain text. Method ::: Creating new input vectors In a first step, we train Word2Vec on the target-domain corpus. In a second step, we take the intersection of $\mathbb {L}_\mathrm {LM}$ and $\mathbb {L}_\mathrm {W2V}$. In practice, the intersection mostly contains wordpieces from $\mathbb {L}_\mathrm {LM}$ that correspond to standalone words. It also contains single characters and other noise, however, we found that filtering them does not improve alignment quality. In a third step, we use the intersection to fit an unconstrained linear transformation $\mathbf {W} \in \mathbb {R}^{d_\mathrm {LM} \times d_\mathrm {W2V}}$ via least squares: Intuitively, $\mathbf {W}$ makes Word2Vec vectors “look like” the PTLM's native wordpiece vectors, just like cross-lingual alignment makes word vectors from one language “look like” word vectors from another language. In Table TABREF7 (top), we show examples of within-space and cross-space nearest neighbors after alignment. Method ::: Updating the wordpiece embedding layer Next, we redefine the wordpiece embedding layer of the PTLM. The most radical strategy would be to replace the entire layer with the aligned Word2Vec vectors: In initial experiments, this strategy led to a drop in performance, presumably because function words are not well represented by Word2Vec, and replacing them disrupts BERT's syntactic abilities. To prevent this problem, we leave existing wordpiece vectors intact and only add new ones: Method ::: Updating the tokenizer In a final step, we update the tokenizer to account for the added words. 
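Summarizing the alignment and embedding-update steps described above, a minimal NumPy sketch is given below; the vocabulary lookups, matrix names and function names are illustrative assumptions rather than the released code. W is fit by least squares on the vocabulary intersection, and the aligned Word2Vec vectors for new words are appended to the existing wordpiece embedding matrix, leaving the original vectors untouched.

import numpy as np

def fit_alignment(E_lm, E_w2v, shared_words, lm_vocab, w2v_vocab):
    # Fit W (d_lm x d_w2v) by least squares so that W @ w2v_vec approximates
    # the PTLM wordpiece vector for every word in the intersection.
    # E_lm, E_w2v are embedding matrices; lm_vocab, w2v_vocab map word -> row index.
    X = np.stack([E_w2v[w2v_vocab[w]] for w in shared_words])  # (n, d_w2v)
    Y = np.stack([E_lm[lm_vocab[w]] for w in shared_words])    # (n, d_lm)
    # Solve min_W ||X W^T - Y||^2, i.e. W^T = lstsq(X, Y)
    W_T, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W_T.T                                               # (d_lm, d_w2v)

def extend_embeddings(E_lm, E_w2v, W, new_words, w2v_vocab):
    # Keep existing wordpiece vectors intact and append aligned vectors
    # for target-domain words that are new to the PTLM vocabulary.
    new_rows = np.stack([W @ E_w2v[w2v_vocab[w]] for w in new_words])
    return np.concatenate([E_lm, new_rows], axis=0)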
Let $\mathcal {T}_\mathrm {LM}$ be the standard BERT tokenizer, and let $\hat{\mathcal {T}}_\mathrm {LM}$ be the tokenizer that treats all words in $\mathbb {L}_\mathrm {LM} \cup \mathbb {L}_\mathrm {W2V}$ as one-wordpiece tokens, while tokenizing any other words as usual. In practice, a given word may or may not benefit from being tokenized by $\hat{\mathcal {T}}_\mathrm {LM}$ instead of $\mathcal {T}_\mathrm {LM}$. To give a concrete example, 82% of the words in the BC5CDR NER dataset that end in the suffix -ia are inside a disease entity (e.g., tachycardia). $\mathcal {T}_\mathrm {LM}$ tokenizes this word as ta ##chy ##card ##ia, thereby exposing the orthographic cue to the model. As a result, $\mathcal {T}_\mathrm {LM}$ leads to higher recall on -ia diseases. But there are many cases where wordpiece tokenization is meaningless or misleading. For instance euthymia (not a disease) is tokenized by $\mathcal {T}_\mathrm {LM}$ as e ##uth ##ym ##ia, making it likely to be classified as a disease. By contrast, $\hat{\mathcal {T}}_\mathrm {LM}$ gives euthymia a one-wordpiece representation that depends only on distributional semantics. We find that using $\hat{\mathcal {T}}_\mathrm {LM}$ improves precision on -ia diseases. To combine these complementary strengths, we use a 50/50 mixture of $\mathcal {T}_\mathrm {LM}$-tokenization and $\hat{\mathcal {T}}_\mathrm {LM}$-tokenization when finetuning the PTLM on a task. At test time, we use both tokenizers and mean-pool the outputs. Let $o(\mathcal {T}(S))$ be some output of interest (e.g., a logit), given sentence $S$ tokenized by $\mathcal {T}$. We predict: Experiment 1: Biomedical NER In this section, we use the proposed method to create GreenBioBERT, an inexpensive and environmentally friendly alternative to BioBERT. Recall that BioBERTv1.0 (biobert_v1.0_pubmed_pmc) was initialized from general-domain BERT (bert-base-cased) and pretrained on PubMed+PMC. Experiment 1: Biomedical NER ::: Domain adaptation We train Word2Vec with vector size $d_\mathrm {W2V} = d_\mathrm {LM} = 768$ on PubMed+PMC (see Appendix for details). Then, we follow the procedure described in Section SECREF3 to update the wordpiece embedding layer and tokenizer of general-domain BERT. Experiment 1: Biomedical NER ::: Finetuning We finetune GreenBioBERT on the eight publicly available NER tasks used in BIBREF2. We also do reproduction experiments with general-domain BERT and BioBERTv1.0, using the same setup as our model. We average results over eight random seeds. See Appendix for details on preprocessing, training and hyperparameters. Experiment 1: Biomedical NER ::: Results and discussion Table TABREF7 (bottom) shows entity-level precision, recall and F1. For ease of visualization, Figure FIGREF13 shows what portion of the BioBERT – BERT F1 delta is covered. We improve over general-domain BERT on all tasks with varying effect sizes. Depending on the points of reference, we cover an average 52% to 60% of the BioBERT – BERT F1 delta (54% for BioBERTv1.0, 60% for BioBERTv1.1 and 52% for our reproduction experiments). Table TABREF17 (top) shows the importance of vector space alignment: If we replace the aligned Word2Vec vectors with their non-aligned counterparts (by setting $\mathbf {W} = \mathbf {1}$) or with randomly initialized vectors, F1 drops on all tasks. Experiment 2: Covid-19 QA In this section, we use the proposed method to quickly adapt an existing general-domain QA model to an emerging target domain: Covid-19. 
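The test-time combination described in the tokenizer section above simply averages the model's output under the two tokenizations; a minimal sketch, where model_output, tok_lm and tok_hat are hypothetical callables standing in for the finetuned model and the two tokenizers:

def pooled_output(sentence, tok_lm, tok_hat, model_output):
    # Run the same sentence through both tokenizers and mean-pool the
    # resulting outputs, mirroring o = 0.5 * (o(T(S)) + o(T_hat(S))).
    o_standard = model_output(tok_lm(sentence))   # standard wordpiece tokenization
    o_extended = model_output(tok_hat(sentence))  # tokenizer with added whole words
    return 0.5 * (o_standard + o_extended)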
Our baseline model is SQuADBERT (bert-large-uncased-whole-word-masking-finetuned-squad), a version of BERT that was finetuned on general-domain SQuAD BIBREF19. We evaluate on Deepset-AI Covid-QA, a SQuAD-style dataset with 1380 questions (see Appendix for details on data and preprocessing). We assume that there is no target-domain finetuning data, which is a realistic setup for a new domain. Experiment 2: Covid-19 QA ::: Domain adaptation We train Word2Vec with vector size $d_\mathrm {W2V} = d_\mathrm {LM} = 1024$ on CORD-19 (Covid-19 Open Research Dataset) and/or PubMed+PMC. The process takes less than an hour on CORD-19 and about one day on the combined corpus, again without the need for a GPU. Then, we update SQuADBERT's wordpiece embedding layer and tokenizer, as described in Section SECREF3. We refer to the resulting model as GreenCovidSQuADBERT. Experiment 2: Covid-19 QA ::: Results and discussion Table TABREF17 (bottom) shows that GreenCovidSQuADBERT outperforms general-domain SQuADBERT in all metrics. Most of the improvement can be achieved with just the small CORD-19 corpus, which is more specific to the target domain (compare “Cord-19 only” and “Cord-19+PubMed+PMC”). Conclusion As a reaction to the trend towards high-resource models, we have proposed an inexpensive, CPU-only method for domain-adapting Pretrained Language Models: We train Word2Vec vectors on target-domain data and align them with the wordpiece vector space of a general-domain PTLM. On eight biomedical NER tasks, we cover over 50% of the BioBERT – BERT F1 delta, at 5% of BioBERT's domain adaptation CO$_2$ footprint and 2% of its cloud compute cost. We have also shown how to rapidly adapt an existing BERT QA model to an emerging domain – the Covid-19 pandemic – without the need for target-domain Language Model pretraining or finetuning. We hope that our approach will benefit practitioners with limited time or resources, and that it will encourage environmentally friendlier NLP. Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Word2Vec training We downloaded the PubMed, PMC and CORD-19 corpora from: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/ [20 January 2020, 68GB raw text] https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/ [20 January 2020, 24GB raw text] https://pages.semanticscholar.org/coronavirus-research [17 April 2020, 2GB raw text] We extract all abstracts and text bodies and apply the BERT basic tokenizer (a word tokenizer that standard BERT uses before wordpiece tokenization). Then, we train CBOW Word2Vec with negative sampling. We use default parameters except for the vector size (which we set to $d_\mathrm {W2V} = d_\mathrm {LM}$). Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Pretrained models General-domain BERT and BioBERTv1.0 were downloaded from: https://storage.googleapis.com/bert_models/2018_10_18/cased_L-12_H-768_A-12.zip https://github.com/naver/biobert-pretrained Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Data We downloaded the NER datasets by following instructions on https://github.com/dmis-lab/biobert#Datasets. For detailed dataset statistics, see BIBREF2. Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Preprocessing We cut all sentences into chunks of 30 or fewer whitespace-tokenized words (without splitting inside labeled spans). 
Then, we tokenize every chunk $S$ with $\mathcal {T} = \mathcal {T}_\mathrm {LM}$ or $\mathcal {T} = \hat{\mathcal {T}}_\mathrm {LM}$ and add special tokens: Word-initial wordpieces in $\mathcal {T}(S)$ are labeled as B(egin), I(nside) or O(utside), while non-word-initial wordpieces are labeled as X(ignore). Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Modeling, training and inference We follow BIBREF2's implementation (https://github.com/dmis-lab/biobert): We add a randomly initialized softmax classifier on top of the last BERT layer to predict the labels. We finetune the entire model to minimize negative log likelihood, with the standard Adam optimizer BIBREF20 and a linear learning rate scheduler (10% warmup). Like BIBREF2, we finetune on the concatenation of the training and development set. All finetuning runs were done on a GeForce Titan X GPU (12GB). Since we do not have the resources for an extensive hyperparameter search, we use defaults and recommendations from the BioBERT repository: Batch size of 32, peak learning rate of $1 \cdot 10^{-5}$, and 100 epochs. At inference time, we gather the output logits of word-initial wordpieces only. Since the number of word-initial wordpieces is the same for $\mathcal {T}_\mathrm {LM}(S)$ and $\hat{\mathcal {T}}_\mathrm {LM}(S)$, this makes mean-pooling the logits straightforward. Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 1: Biomedical NER ::: Note on our reproduction experiments We found it easier to reproduce or exceed BIBREF2's results for general-domain BERT, compared to their results for BioBERTv1.0 (see Figure FIGREF13, main paper). While this may be due to hyperparameters, it suggests that BioBERTv1.0 was more strongly tuned than BERT in the original BioBERT paper. This observation does not affect our conclusions, as GreenBioBERT performs better than reproduced BERT as well. Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 2: Covid-19 QA ::: Pretrained model We downloaded the SQuADBERT baseline from: https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 2: Covid-19 QA ::: Data We downloaded the Deepset-AI Covid-QA dataset from: https://github.com/deepset-ai/COVID-QA/blob/master/data/question-answering/200423_covidQA.json [24 April 2020] At the time of writing, the dataset contains 1380 questions and gold answer spans. Every question is associated with one of 98 research papers (contexts). We treat the entire dataset as a test set. Note that there are some important differences between the dataset and SQuAD, which make the task challenging: The contexts are full documents rather than single paragraphs. Thus, the correct answer may appear several times, often with slightly different wordings. Only a single one of the occurrences is annotated as correct, e.g.: What was the prevalence of Coronavirus OC43 in community samples in Ilorin, Nigeria? 13.3% (95% CI 6.9-23.6%) # from main text (13.3%, 10/75). # from abstract SQuAD gold answers are defined as the “shortest span in the paragraph that answered the question” BIBREF19, but many Covid-QA gold answers are longer and contain non-essential context, e.g.: When was the Middle East Respiratory Syndrome Coronavirus isolated first? 
(MERS-CoV) was first isolated in 2012, in a 60-year-old man who died in Jeddah, KSA due to severe acute pneumonia and multiple organ failure 2012, Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 2: Covid-19 QA ::: Preprocessing We tokenize every question-context pair $(Q, C)$ with $\mathcal {T} = \mathcal {T}_\mathrm {LM}$ or $\mathcal {T} = \hat{\mathcal {T}}_\mathrm {LM}$, which yields $(\mathcal {T}(Q), \mathcal {T}(C))$. Since $\mathcal {T}(C)$ is usually too long to be digested in a single forward pass, we define a sliding window with width and stride $N = \mathrm {floor}(\frac{509 - |\mathcal {T}(Q)|}{2})$. At step $n$, the “active” window is between $a^{(l)}_n = nN$ and $a^{(r)}_n = \mathrm {min}(|C|, nN+N)$. The input is defined as: $p^{(l)}_n$ and $p^{(r)}_n$ are chosen such that $|X^{(n)}| = 512$, and such that the active window is in the center of the input (if possible). Inexpensive Domain Adaptation of Pretrained Language Models (Appendix) ::: Experiment 2: Covid-19 QA ::: Modeling and inference Feeding $X^{(n)}$ into the pretrained QA model yields start logits $\mathbf {h^{\prime }}^{(\mathrm {start}, n)} \in \mathbb {R}^{|X^{(n)}|}$ and end logits $\mathbf {h^{\prime }}^{(\mathrm {end},n)} \in \mathbb {R}^{|X^{(n)}|}$. We extract and concatenate the slices that correspond to the active windows of all steps: Next, we map the logits from the wordpiece level to the word level. This allows us to mean-pool the outputs of $\mathcal {T}_\mathrm {LM}$ and $\hat{\mathcal {T}}_\mathrm {LM}$ even when $|\mathcal {T}_\mathrm {LM}(C)| \ne |\hat{\mathcal {T}}_\mathrm {LM}(C)|$. Let $c_i$ be a whitespace-delimited word in $C$. Let $\mathcal {T}(C)_{j:j+|\mathcal {T}(c_i)|}$ be the corresponding wordpieces. The start and end logits of $c_i$ are derived as: Finally, we return the answer span $C_{k:k^{\prime }}$ that maximizes $o^{(\mathrm {start})}_k + o^{(\mathrm {end})}_{k^{\prime }}$, subject to the constraints that $k^{\prime }$ does not precede $k$ and the answer span is not longer than 500 characters. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Which eight NER tasks did they evaluate on? Only give me the answer and do not output any other words.
Answer:
[ "BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800", "BC5CDR-disease, NCBI-disease, BC5CDR-chem, BC4CHEMD, BC2GM, JNLPBA, LINNAEUS, Species-800" ]
276
4,947
single_doc_qa
d5d74bd876674e5e9296fd8108862314
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The explosion of available scientific articles in the Biomedical domain has led to the rise of Biomedical Information Extraction (BioIE). BioIE systems aim to extract information from a wide spectrum of sources, including medical literature, biological literature, electronic health records, etc., that can be used by clinicians and researchers in the field. Often the outputs of BioIE systems are used to assist in the creation of databases, or to suggest new paths for research. For example, a ranked list of interacting proteins that are extracted from biomedical literature, but are not present in existing databases, can allow researchers to make informed decisions about which protein/gene to study further. Knowledge of interactions between drugs is necessary for clinicians who simultaneously administer multiple drugs to their patients. A database of diseases, treatments and tests is beneficial for doctors consulting on complicated medical cases. The main problems in BioIE are similar to those in Information Extraction. This paper discusses, in each section, various methods that have been adopted to solve these problems. Each section also highlights the difficulty of Information Extraction tasks in the biomedical domain. This paper is intended as a primer on Biomedical Information Extraction for current NLP researchers. It aims to highlight the diversity of the techniques from Information Extraction that have been applied in the Biomedical domain. The state of biomedical text mining is reviewed regularly. For more extensive surveys, consult BIBREF0, BIBREF1, BIBREF2. Named Entity Recognition and Fact Extraction Named Entity Recognition (NER) in the Biomedical domain usually includes recognition of entities such as proteins, genes, diseases, treatments, drugs, etc. Fact extraction involves extraction of Named Entities from a corpus, usually given a certain ontology. Compared to NER on general text, the biomedical domain poses some characteristic challenges. Some of the earliest systems were heavily dependent on hand-crafted features. The method proposed in BIBREF4 for recognition of protein names in text does not require any prepared dictionary. The work gives examples of the diversity in protein names and lists multiple rules depending on simple word features as well as POS tags. BIBREF5 adopt a machine learning approach for NER. Their NER system extracts medical problems, tests and treatments from discharge summaries and progress notes. They use a semi-Conditional Random Field (semi-CRF) BIBREF6 to output labels over all tokens in the sentence. They use a variety of token, context and sentence-level features. They also use some concept mapping features obtained from existing annotation tools, as well as Brown clustering to form 128 clusters over the unlabelled data. The dataset used is the i2b2 2010 challenge dataset. Their system achieves an F-Score of 0.85. BIBREF7 is an incremental paper on NER taggers. It uses 3 types of word-representation techniques (Brown clustering, distributional clustering, word vectors) to improve the performance of an NER Conditional Random Field tagger, and achieves marginal F-Score improvements.
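To make the kind of token, context and sentence-level features used by such CRF-style taggers concrete, here is a small generic feature-extraction sketch; the feature set is illustrative and not the exact one used in the cited systems:

def token_features(tokens, i):
    # Generic per-token features for a CRF-style biomedical NER tagger:
    # surface form, shape cues, affixes, and a one-token context window.
    w = tokens[i]
    feats = {
        "word.lower": w.lower(),
        "word.isupper": w.isupper(),
        "word.istitle": w.istitle(),
        "word.has_digit": any(ch.isdigit() for ch in w),
        "word.has_hyphen": "-" in w,
        "suffix3": w[-3:],
        "prefix3": w[:3],
    }
    feats["prev.lower"] = tokens[i - 1].lower() if i > 0 else "<BOS>"
    feats["next.lower"] = tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>"
    return feats

sentence = "Patient was started on metformin for type 2 diabetes".split()
features = [token_features(sentence, i) for i in range(len(sentence))]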
BIBREF8 propose a bootstrapping mechanism to bootstrap biomedical ontologies using NELL BIBREF9, which uses a coupled semi-supervised bootstrapping approach to extract facts from text, given an ontology and a small number of “seed” examples for each category. This interesting approach (called BioNELL) uses an ontology of over 100 categories. In contrast to NELL, BioNELL does not contain any relations in the ontology. BioNELL is motivated by the fact that a lot of the scientific literature available online is highly reliable due to peer review. The authors note that the algorithm used by NELL to bootstrap fails in BioNELL due to ambiguities in biomedical literature and heavy semantic drift. One of the causes for this is that common words such as “white”, “dad” and “arm” are often used as names of genes; this can easily result in semantic drift within one iteration of the bootstrapping. In order to mitigate this, they use Pointwise Mutual Information scores as corpus-level statistics, which attribute small scores to common words. In addition, in contrast to NELL, BioNELL only uses high-ranking instances as seeds in the next iteration, but adds low-ranking instances to the knowledge base. Since evaluation is not possible using Mechanical Turk or a small number of experts (due to the complexity of the task), they use Freebase BIBREF10, a knowledge base that contains some biomedical concepts as well. The lexicon learned using BioNELL is used to train an NER system. The system shows very high precision, thereby indicating that BioNELL learns very few ambiguous terms. More recently, deep learning techniques have been developed to further enhance the performance of NER systems. BIBREF11 explore recurrent neural networks for the problem of NER in biomedical text. Relation Extraction In Biomedical Information Extraction, Relation Extraction involves finding related entities of many different kinds. Some of these include protein-protein interactions, disease-gene relations and drug-drug interactions. Due to the explosion of available biomedical literature, it is impossible for one person to extract relevant relations from published material. Automatic extraction of relations assists in the process of database creation by suggesting potentially related entities with links to the source article. For example, a database of drug-drug interactions is important for clinicians who administer multiple drugs simultaneously to their patients; it is imperative to know if one drug will have an adverse effect on the other. A variety of methods have been developed for relation extraction, often inspired by Relation Extraction in general NLP tasks. These include rule-based approaches, hand-crafted patterns, feature-based and kernel machine learning methods, and, more recently, deep learning architectures. Relation Extraction systems over Biomedical Corpora are often affected by noisy extraction of entities, due to ambiguities in the names of proteins, genes, drugs, etc. BIBREF12 was one of the first large-scale Information Extraction efforts to study the feasibility of extracting protein-protein interactions (such as “protein A activates protein B”) from Biomedical text. Using 8 hand-crafted regular expressions over a fixed vocabulary, the authors were able to achieve a recall of 30% for interactions present in the Dictionary of Interacting Proteins (DIP) from abstracts in Medline. The method did not differentiate between types of relations.
The reasons for the low recall were inconsistency in protein nomenclature, information not present in the abstract, and the specificity of the hand-crafted patterns. On a small subset of extracted relations, they found that about 60% were true interactions between proteins not present in DIP. BIBREF13 combine sentence-level relation extraction for protein interactions with corpus-level statistics. Similar to BIBREF12, they do not consider the type of interaction between proteins, only whether they interact in the general sense of the word. They also do not differentiate between genes and their protein products (which may share the same name). They use Pointwise Mutual Information (PMI) as a corpus-level statistic to determine whether a pair of proteins occurs together by chance or because they interact. They combine this with a confidence aggregator that takes the maximum of the confidence of the extractor over all extractions for the same protein pair. The extraction uses a subsequence kernel based on BIBREF14. The integrated model, which combines PMI with aggregate confidence, gives the best performance. Kernel methods have been widely studied for Relation Extraction in Biomedical Literature. Commonly used kernels usually exploit linguistic information, for instance kernels based on the dependency tree BIBREF15, BIBREF16, BIBREF17. BIBREF18 look at the extraction of diseases and their relevant genes. They use a dictionary from six public databases to annotate genes and diseases in Medline abstracts. In their work, the authors note that when both genes and diseases are correctly identified, they are related in 94% of the cases. The problem then reduces to filtering incorrect matches produced by the dictionary, which occur due to false positives resulting from ambiguities in the names as well as ambiguities in abbreviations. To this end, they train a Max-Ent-based NER classifier for the task and get a 26% gain in precision over the unfiltered baseline, with a slight hit in recall. They use POS tags, expanded forms of abbreviations and indicators for Greek letters, as well as suffixes and prefixes commonly used in biomedical terms. BIBREF19 adopt a supervised feature-based approach for the extraction of drug-drug interactions (DDI) on the DDI-2013 dataset BIBREF20. They partition the data into subsets depending on the syntactic features, and train a different model for each. They use lexical, syntactic and verb-based features on top of shallow parse features, in addition to a hand-crafted list of trigger words, to define their features. An SVM classifier is then trained on the feature vectors, with a positive label if the drug pair interacts and a negative label otherwise. Their method beats other systems on the DDI-2013 dataset. Some other feature-based approaches are described in BIBREF21, BIBREF22. Distant supervision methods have also been applied to relation extraction over biomedical corpora. In BIBREF23, 10,000 neuroscience articles are distantly supervised using information from the UMLS Semantic Network to classify brain-gene relations into geneExpression and otherRelation. They use lexical (bag-of-words, contextual) features as well as syntactic (dependency parse) features. They make the “at-least-one” assumption, i.e., at least one of the sentences extracted for a given entity pair expresses the relation in the database. They model this as a multi-instance learning problem and adopt a graphical model similar to BIBREF24. They test using manually annotated examples.
They note that the F-scores achieved are much lower than those achieved in the general domain in BIBREF24, and attribute this to the generally poorer performance of NER tools in the biomedical domain, as well as to fewer training examples. BIBREF25 explore distant supervision methods for protein-protein interaction extraction. More recently, deep learning methods have been applied to relation extraction in the biomedical domain. One of the main advantages of such methods over traditional feature- or kernel-based learning methods is that they require minimal feature engineering. In BIBREF26, skip-gram vectors BIBREF27 are trained over 5.6GB of unlabelled text. They use these vectors to extract protein-protein interactions by converting them into features for entities, context and the entire sentence. Using an SVM for classification, their method is able to outperform many kernel- and feature-based methods over a variety of datasets. BIBREF28 follow a similar method by using word vectors trained on PubMed articles. They use it for the task of relation extraction from clinical text for entities that include problems, treatments and medical tests. For a given sentence with labelled entities, they predict the type of relation (or None) exhibited by the entity pair. These types include “treatment caused medical problem”, “test conducted to investigate medical problem”, “medical problem indicates medical problems”, etc. They use a Convolutional Neural Network (CNN) followed by a feedforward neural network for prediction. In addition to pre-trained word vectors as features, for each token they also add features for POS tags, distance from both entities in the sentence, as well as BIO tags for the entities. Their model performs better than a feature-based SVM baseline that they train themselves. The BioNLP'16 Shared Task also introduced some Relation Extraction tasks, in particular the BB3-event subtask, which involves predicting whether a “lives-in” relation holds for a Bacteria entity in a location. Some of the top-performing models for this task are deep learning models. BIBREF29 train word embeddings on six billion words of scientific text from PubMed. They then consider the shortest dependency path between the two entities (Bacteria and location). For each token in the path, they use word embedding features, POS type embeddings and dependency type embeddings. They train a unidirectional LSTM BIBREF30 over the dependency path, which achieves an F-Score of 52.1% on the test set. BIBREF31 improve the performance by making modifications to the above model. Instead of using the shortest dependency path, they modify the parse tree based on some pruning strategies. They also add feature embeddings for each token to represent the distance from the entities in the shortest path. They then train a Bidirectional LSTM on the path, and obtain an F-Score of 57.1%. The recent success of deep learning models in Biomedical Relation Extraction, which require minimal feature engineering, is promising. It also suggests new avenues of research in the field. An approach such as that in BIBREF32 can be used to combine multi-instance learning and distant supervision with a neural architecture. Event Extraction Event Extraction in the Biomedical domain is a task that has gained more importance recently. Event Extraction goes beyond Relation Extraction. In Biomedical Event Extraction, events generally refer to a change in the state of biological molecules such as proteins and DNA.
Generally, it includes detection of targeted event types such as gene expression, regulation, localisation and transcription. Each event type in addition can have multiple arguments that need to be detected. An additional layer of complexity comes from the fact that events can also be arguments of other events, giving rise to a nested structure. This helps to capture the underlying biology better BIBREF1 . Detecting the event type often involves recognising and classifying trigger words. Often, these words are verbs such as “activates”, “inhibits”, “phosphorylation” that may indicate a single, or sometimes multiple event types. In this section, we will discuss some of the successful models for Event Extraction in some detail. Event Extraction gained a lot of interest with the availability of an annotated corpus with the BioNLP'09 Shared Task on Event Extraction BIBREF34 . The task involves prediction of trigger words over nine event types such as expression, transcription, catabolism, binding, etc. given only annotation of named entities (proteins, genes, etc.). For each event, its class, trigger expression and arguments need to be extracted. Since the events can be arguments to other events, the final output in general is a graph representation with events and named entities as nodes, and edges that correspond to event arguments. BIBREF33 present a pipeline based method that is heavily dependent on dependency parsing. Their pipeline approach consists of three steps: trigger detection, argument detection and semantic post-processing. While the first two components are learning based systems, the last component is a rule based system. For the BioNLP'09 corpus, only 5% of the events span multiple sentences. Hence the approach does not get affected severely by considering only single sentences. It is important to note that trigger words cannot simply be reduced to a dictionary lookup. This is because a specific word may belong to multiple classes, or may not always be a trigger word for an event. For example, “activate” is found to not be a trigger word in over 70% of the cases. A multi-class SVM is trained for trigger detection on each token, using a large feature set consisting of semantic and syntactic features. It is interesting to note that the hyperparameters of this classifier are optimised based on the performance of the entire end-to-end system. For the second component to detect arguments, labels for edges between entities must be predicted. For the BioNLP'09 Shared Task, each directed edge from one event node to another event node, or from an event node to a named entity node are classified as “theme”, “cause”, or None. The second component of the pipeline makes these predictions independently. This is also trained using a multi-class SVM which involves heavy use of syntactic features, including the shortest dependency path between the nodes. The authors note that the precision-recall choice of the first component affects the performance of the second component: since the second component is only trained on Gold examples, any error by the first component will lead to a cascading of errors. The final component, which is a semantic post-processing step, consists of rules and heuristics to correct the output of the second component. Since the edge predictions are made independently, it is possible that some event nodes do not have any edges, or have an improper combination of edges. 
The rule based component corrects these and applies rules to break directed cycles in the graph, and some specific heuristics for different types of events. The final model gives a cumulative F-Score of 52% on the test set, and was the best model on the task. BIBREF35 note that previous approaches on the task suffer due to the pipeline nature and the propagation of errors. To counter this, they adopt a joint inference method based on Markov Logic Networks BIBREF36 for the same task on BioNLP'09. The Markov Logic Network jointly predicts whether each token is a trigger word, and if yes, the class it belongs to; for each dependency edge, whether it is an argument path leading to a “theme” or a “cause”. By formulating the Event Extraction problem using an MLN, the approach becomes computationally feasible and only linear in the length of the sentence. They incorporate hard constraints to encode rules such as “an argument path must have an event”, “a cause path must start with a regulation event”, etc. In addition, they also include some domain specific soft constraints as well as some linguistically-motivated context-specific soft constraints. In order to train the MLN, stochastic gradient descent was used. Certain heuristic methods are implemented in order to deal with errors due to syntactic parsing, especially ambiguities in PP-attachment and coordination. Their final system is competitive and comes very close to the system by BIBREF33 with an average F-Score of 50%. To further improve the system, they suggest leveraging additional joint-inference opportunities and integrating the syntactic parser better. Some other more recent models for Biomedical Event Extraction include BIBREF37 , BIBREF38 . Conclusion We have discussed some of the major problems and challenges in BioIE, and seen some of the diverse approaches adopted to solve them. Some interesting problems such as Pathway Extraction for Biological Systems BIBREF39 , BIBREF40 have not been discussed. Biomedical Information Extraction is a challenging and exciting field for NLP researchers that demands application of state-of-the-art methods. Traditionally, there has been a dependence on hand-crafted features or heavily feature-engineered methods. However, with the advent of deep learning methods, a lot of BioIE tasks are seeing an improvement by adopting deep learning models such as Convolutional Neural Networks and LSTMs, which require minimal feature engineering. Rapid progress in developing better systems for BioIE will be extremely helpful for clinicians and researchers in the Biomedical domain. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Does the paper explore extraction from electronic health records? Only give me the answer and do not output any other words.
Answer:
[ "Yes" ]
276
3,850
single_doc_qa
2449b672a82945b486ab4ac9d78f41c1
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Ann's Mega Dub: 12/19/10 - 12/26/10 Got o have a penis to be an expert Thursday on NPR's Fresh Air, Terry Gross wanted to talk film and music. Since women don't know a thing about either and aren't interested in either, Terry had to find men who were 'experts.'This is C.I.'s " Iraq snapshot Friday, December 24, 2010. Chaos and violence continue, Nouri's incomplete Cabinet continues to receive criticism, a father offers an 'excuse' for killing his own daughter, and more.Marci Stone (US Headlines Examiner) reports, "Friday afternoon, Santa is currently in Baghdad, Iraq and on his next stop is Moscow, Russia, according to the 2010 NORAD Santa Tracker. The North American Aerospace Defense Command (NORAD) has been tracking Santa as he makes his annual journey throughout the world." Gerald Skoning (Palm Beach Post) quotes Santa saying, "We send our special wishes for peace and goodwill to all. That includes the people of Iraq, Afghanistan, Iran and North Korea." Please note that this is Santa's seventh trip to Iraq since the start of the Iraq War and, as usual, his journey was known in advance. No waiting until he hit the ground to announce he was going to Iraq -- the way George The Bully Boy Bush had to and the way US President Barack Obama still has to. In the lead up to Santa's yearly visit, many 'authorities' in Iraq began insisting that Christmas couldn't be celebrated publicly, that even Santa was banned. Gabriel Gatehouse (BBC News) quotes Shemmi Hanna stating, "I wasn't hurt but I wish that I had been killed. I wish I had become a martyr for this church, but God kept me alive for my daughters." Shemmi Hanna was in Our Lady of Salvation Church in Baghdad when it was assaulted October 31st and she lost her husband, her son, her daughter-in-law and her infant grandson in the attack. The October 31st attack marks the latest wave of violence targeting Iraqi Christians. The violence has led many to flee to northern Iraq (KRG) or to other countries. Zvi Bar'el (Haaretz) notes, "This week the Iraqi legislature discussed the Christians' situation and passed a resolution in principle to help families who fled. However, the parliament does not know where the Christians are, how many are still in Iraq, in their homes, and how many have found asylum in Iraqi Kurdistan." John Leland (New York Times) reports:The congregants on Friday night were fewer than 100, in a sanctuary built for four or five times as many. But they were determined. This year, even more than in the past, Iraqi's dwindling Christian minority had reasons to stay home for Christmas. "Yes, we are threatened, but we will not stop praying," the Rev. Meyassr al-Qaspotros told the Christmas Eve crowd at the Sacred Church of Jesus, a Chaldean Catholic church. "We do not want to leave the country because we will leave an empty space." Raheem Salman (Los Angeles Times) reports, "Rimon Metti's family will go to Christian services on Christmas Day, but his relatives will be praying for their own survival and wondering whether this is their last holiday season in Baghdad. If they had any grounds for optimism about the future of their faith in Iraq, it vanished this year amid repeated attacks on fellow believers." 
Shahsank Bengali (McClatchy Newspapers) adds, "Nearly two months after a shocking assault by Islamist militants, Our Lady of Salvation Catholic Church will commemorate Christmas quietly, with daytime mass and prayers for the dead, under security fit more for a prison than a house of worship. It is the same at Christian churches across Baghdad and northern Iraq, where what's left of one of the world's oldest Christian communities prepares to mark perhaps the most somber Christmas since the start of the Iraq war."Meanwhile Taylor Luck (Jordan Times) reports on Iraqi refugees in Jordan:Although the calendar will say December 25, for Theresa, Saturday will not be Christmas. There will be no cinnamon klecha cooling on the dining room table, no outdoor ceramic nativity scene, no readings of hymns with relatives. The 63-year-old Iraqi woman has even refused to put up Christmas lights in the crowded two-room Amman hotel apartment she has called home since fleeing Baghdad last month."There is no holiday spirit. All we have is fear," she said.This holiday will instead mark another year without news from her 46-year-old son, who was kidnapped outside Baghdad in late 2006.From Turkey, Sebnem Arsu (New York Times -- link has text and video) notes the increase in Iraq refugees to the country since October 31st and quotes Father Emlek stating, "I've never seen as many people coming here as I have in the last few weeks. They also go to Lebanon, Jordan and Syria but it seems that Turkey is the most popular despite the fact that they do not speak the language." Jeff Karoub (AP) reports on the small number of Iraqi refugees who have made it to the US and how some of them "struggle with insomnia, depression and anxiety."One group in Iraq who can openly celebrate Christmas are US service members who elect to. Barbara Surk (AP) reports that tomorrow Chief Warrant Officer Archie Morgan will celebrate his fourth Christmas in Iraq and Captain Diana Crane is celebrating her second Christmas in Iraq: "Crane was among several dozen troops attending a Christmas Eve mass in a chapel in Camp Victory, an American military base just outside Baghdad." Marc Hansen (Des Moines Reigster) speaks with six service members from Iowa who are stationed in Iraq. Sgt 1st Class Dennis Crosser tells Hansen, "I certainly understand from reading the paper what's going on in Afghanistan and the attention definitely needs to be on the troops there. But everyone serving here in Operation New Dawn appreciates a little bit of attention as we finish this up."Today Jiang Yu, China's Foreign Minister, issued the following statement, "We welcome and congratulate Iraq on forming a new government. We hope that the Iraqi Government unite all its people, stabilize the security situation, accelerate economic reconstruction and make new progress in building its country." James Cogan (WSWS) reports:US State Department official Philip Crowley declared on Wednesday that Washington had not "dictated the terms of the government". In reality, constant American pressure was applied to Maliki, Allawi, Kurdish leaders and other prominent Iraqi politicians throughout the entire nine-month process to form a cabinet. 
The US intervention included numerous personal phone calls and visits to Baghdad by both President Barack Obama and Vice President Joe Biden.The key objective of the Obama administration has been to ensure that the next Iraqi government will "request" a long-term military partnership with the US when the current Status of Forces Agreement (SOFA) expires at the end of 2011. The SOFA is the legal basis upon which some 50,000 American troops remain in Iraq, operating from large strategic air bases such as Balad and Tallil and Al Asad. US imperialism spent billions of dollars establishing these advanced bases as part of its wider strategic plans and has no intention of abandoning them.Cogan's only the second person to include the SOFA in his report. Some are impressed with the 'feat' of taking nearly ten months to form a government, stringing the country along for ten months while no decisions could go through. The editorial board of the Washington Post, for example, was full of praise yesterday. Today they're joined by Iran's Ambassador to Iraq, Hassan Danaiifar. The Tehran Times reports that Danaiifar was full of praise today hailing the "positive and final step which ended the 10-month political limbo in Iraq." However, Danaiifar was less pie-in-the-sky than the Post editorial board because he can foresee future problems as evidenced by his statement, "We may witness the emergence of some problems after one and half of a year -- for example, some ministers may be impeached." Of course, there are already many clouds on the horizon, even if Iranian diplomats and Post editorial boards can't suss them out. For example, Ben Bendig (Epoch Times) noted the objection of Iraq's female politicians to Nouri al-Maliki's decision to nominate only one woman (so far) to his Cabinet: "Some 50 female lawmakers went to the country's top leadership, the United Nations and the Arab League to voice their concern and desire for increased representation." BNO notes that protest and also that a group of Iraqi MPs are alleging that Iraqiya bought seats in the Cabinet via money exchanged in Jordan. UPI adds, "Maliki, a Shiite who has a long history of working with Tehran, has named himself acting minister of defense, interior and national security, three most powerful and sensitive posts in the government he is stitching together. Although Maliki appears to be bending over backward to accommodate rivals among Iraq's Shiite majority as well as minority Sunnis and Kurds in his administration in a spirit of reconciliation, he is unlikely to relinquish those ministries that dominate the security sector." DPA reports, "Sheikh Abdel-Mahdi al-Karbalaei, a confident of influential Shiite spiritual leader Ayatollah Ali al-Sistani, said that the new cabinet is 'below the standards' Iraqi citizens had hoped for and suggested it could prove to be weaker than the previous government." Ranj Alaaldin (Guardian) also spots clouds on the horizon:Lasting peace and stability depends on resolving outstanding disputes with the Kurds on oil, revenue-sharing, security and the disputed territories (Kirkuk in particular). The Kurds, rather than exploiting their kingmaker position to take a stronger proportion of ministries in Baghdad (they are taking just one major portfolio – the foreign ministry), are instead banking on guarantees from Maliki to implement their list of 19 demands that includes resolving the above disputes in their favour.They may have been naive, though. 
With their historical and federalist partners, the Islamic supreme council of Iraq in decline, the Kurds may be isolated in the new government – a government dominated by the nationalistic and centrist characteristics of the INM, the Sadrists and indeed State of Law.Maliki may, therefore, turn out to be unable to grant concessions even if he wanted to and could use Osama Nujayfi, the new ultra-nationalist speaker of parliament and Kurdish foe, to absorb the Kurdish criticism and insulate himself from any attacks.AP reports that Iraqi police sought out a 19-year-old woman because of rumors that she was working with al Qaida in Mesopotamia only to be greeted with the news that her father allegedly killed her and the father showed the police where he buried the woman . . . last month. The story begs for more than it offers. The most obvious observation is: what does it say that a woman's allegedly killed by her father and no one says a word for over a month? After that, it should probably be noted that there are many men in Iraq killing women who, no doubt, would love to also be able to pin the blame on al Qaida. In other violence, Reuters notes a house bombing in Haswa which claimed the life of Mohammed al-Karrafi, "his wife, two sons and a nephew" -- as well as injuring four more people, and a Samarra roadside bombing which claimed the lives of 2 police officers. DPA notes it was two homes bombed in Haswa and that the Samarra roadside bombing also injured four Iraqi soldiers. Jomana Karadsheh (CNN) reports, "Another policeman was wounded in Baghdad Friday night when a roadside bomb detonated by a police patrol, an Interior Ministry official told CNN."And we'll close with this from Peace Mom Cindy Sheehan's latest Al Jazeera column:The recent repeal of the US military policy of "Don't ask, don't tell" is far from being the human rights advancement some are touting it to be. I find it intellectually dishonest, in fact, illogical on any level to associate human rights with any military, let alone one that is currently dehumanising two populations as well as numerous other victims of it's clandestine "security" policies.Placing this major contention aside, the enactment of the bill might be an institutional step forward in the fight for "equality"; however institutions rarely reflect reality.Do we really think that the US congress vote to repeal the act and Obama signing the bill is going to stop the current systemic harassment of gays in the military?While I am a staunch advocate for equality of marriage and same-sex partnership, I cannot - as a peace activist - rejoice in the fact that now homosexuals can openly serve next to heterosexuals in one of the least socially responsible organisations that currently exists on earth: The US military.It is an organisation tainted with a history of intolerance towards anyone who isn't a Caucasian male from the Mid-West. Even then I'm sure plenty fitting that description have faced the terror and torment enshrined into an institution that transforms the pride and enthusiasm of youth into a narrow zeal for dominating power relations.And we'll close with this from Francis A. Boyle's "2011: Prospects for Humanity?" (Global Research):Historically, this latest eruption of American militarism at the start of the 21st Century is akin to that of America opening the 20th Century by means of the U.S.-instigated Spanish-American War in 1898. 
Then the Republican administration of President William McKinley stole their colonial empire from Spain in Cuba, Puerto Rico, Guam, and the Philippines; inflicted a near genocidal war against the Filipino people; while at the same time illegally annexing the Kingdom of Hawaii and subjecting the Native Hawaiian people (who call themselves the Kanaka Maoli) to near genocidal conditions. Additionally, McKinley's military and colonial expansion into the Pacific was also designed to secure America's economic exploitation of China pursuant to the euphemistic rubric of the "open door" policy. But over the next four decades America's aggressive presence, policies, and practices in the "Pacific" would ineluctably pave the way for Japan's attack at Pearl Harbor on Dec. 7, 194l, and thus America's precipitation into the ongoing Second World War. Today a century later the serial imperial aggressions launched and menaced by the Republican Bush Jr. administration and now the Democratic Obama administration are threatening to set off World War III. By shamelessly exploiting the terrible tragedy of 11 September 2001, the Bush Jr. administration set forth to steal a hydrocarbon empire from the Muslim states and peoples living in Central Asia and the Persian Gulf under the bogus pretexts of (1) fighting a war against international terrorism; and/or (2) eliminating weapons of mass destruction; and/or (3) the promotion of democracy; and/or (4) self-styled "humanitarian intervention." Only this time the geopolitical stakes are infinitely greater than they were a century ago: control and domination of two-thirds of the world's hydrocarbon resources and thus the very fundament and energizer of the global economic system – oil and gas. The Bush Jr./ Obama administrations have already targeted the remaining hydrocarbon reserves of Africa, Latin America, and Southeast Asia for further conquest or domination, together with the strategic choke-points at sea and on land required for their transportation. In this regard, the Bush Jr. administration announced the establishment of the U.S. Pentagon's Africa Command (AFRICOM) in order to better control, dominate, and exploit both the natural resources and the variegated peoples of the continent of Africa, the very cradle of our human species. This current bout of U.S. imperialism is what Hans Morgenthau denominated "unlimited imperialism" in his seminal work Politics Among Nations (4th ed. 1968, at 52-53): The outstanding historic examples of unlimited imperialism are the expansionist policies of Alexander the Great, Rome, the Arabs in the seventh and eighth centuries, Napoleon I, and Hitler. They all have in common an urge toward expansion which knows no rational limits, feeds on its own successes and, if not stopped by a superior force, will go on to the confines of the political world. This urge will not be satisfied so long as there remains anywhere a possible object of domination--a politically organized group of men which by its very independence challenges the conqueror's lust for power. It is, as we shall see, exactly the lack of moderation, the aspiration to conquer all that lends itself to conquest, characteristic of unlimited imperialism, which in the past has been the undoing of the imperialistic policies of this kind…. On 10 November 1979 I visited with Hans Morgenthau at his home in Manhattan. It proved to be our last conversation before he died on 19 July 1980. 
Given his weakened physical but not mental condition and his serious heart problem, at the end of our necessarily abbreviated one-hour meeting I purposefully asked him what he thought about the future of international relations. iraqbbc newsgabriel gatehousethe new york timesjohn lelandhaaretzzvi bar'elthe jordan timestaylor luckthe associated pressjeff karoubthe los angeles timesraheem salmancnnjomana karadsheh Terry thinks she's a man Yesterday on NPR's Fresh Air the hour went to a male TV critic. It's always a man with Terry. Always. And somebody tell her that a snotty, snooty TV critic really doesn't make for good programming.This is C.I.'s "Iraq snapshot:" Thursday, December 23, 2010. Chaos and violence continue, Iraqi women make clear their displeasure over the Cabinet make up, Daniel Ellsberg and Veterans for Peace get some recognition, and more. Last Thursday a protest held outside the White House. One of the organizers was Veterans for Peace and Pentagon Papers whistle blower Daniel Ellsberg participated and spoke. Juana Bordas (Washington Post) advocates for both of them to be named persons of the year: Veterans for Peace and Daniel Ellsberg should be this year's person of the year because of their courage and bravery to stand up for all of us who believe that "war is not the answer." Moreover in a time of economic recession, the war machine is bankrupting our country. As John Amidon, a Marine Corps veteran from Albany asked at the White House protest, "How is the war economy working for you?"While unemployment rates hover near 10 percent, there is no doubt that the U.S. economy and quality of life is faltering. Worldwide we are 14th in education, 37th in the World Health Organization's ranking on medical systems, and 23rd in the U.N. Environmental Sustainability Index on being most livable and greenest benefits. There is one place we take the undeniable world lead. The US military spending accounts for a whopping 46.5 percent of world military spending--the next ten countries combined come in at only 20.7 percent. Linda Pershing (Truthout) reports, "Responding to a call from the leaders of Stop These Wars(1) - a new coalition of Veterans for Peace and other activists - participants came together in a large-scale performance of civil resistance. A group of veterans under the leadership of Veterans for Peace members Tarak Kauff, Will Covert and Elaine Brower, mother of a Marine who has served three tours of duty in Iraq, sponsored the event with the explicit purpose of putting their bodies on the line. Many participants were Vietnam War veterans; others ranged from Iraq and Afghanistan war veterans in their 20s and 30s to World War II vets in their 80s and older. They were predominately white; men outnumbered women by at least three to one. After a short rally in Lafayette Park, they formed a single-file procession, walking across Pennsylvania Avenue to the solemn beat of a drum. As they reached the police barricade (erected to prevent them from chaining themselves to the gate, a plan they announced on their web site), the activists stood shoulder to shoulder, their bodies forming a human link across the 'picture postcard' tableau in front of the White House." Maria Chutchian (Arlington Advocate) quotes, participant Nate Goldshlag (Vietnam veteran) stating, ""There was a silent, single file march around Lafayette Park to a drum beat. Then we went in front of the White House,. There were barricades set up in front of white house fence. 
So when we got there, we jumped over barricades and were able to get right next to the White House fence." Participant Linda LeTendre (Daily Gazette) reports: At the end of the rally, before the silent, solemn procession to the White House fence, in honor of those killed in Iraq and Afghan wars of lies and deceptions, the VFP played taps and folded an American flag that had been left behind at a recent funeral for the veteran of one of those wars. Two attendees in full dress uniform held and folded the flag. I had the image of all of the people who stood along the roads and bridges when the bodies of the two local men, Benjamin Osborn and David Miller, were returned to the Capital District. I thought if all of those people were here now or spoke out against war these two fine young men might still be with us.I was blessed enough to be held in custody with one of those in uniform; a wonderful young man who had to move from his hometown in Georgia because no one understood why as a veteran he was against these wars. Even his family did not understand. (He remains in my prayers.)Our plan was to attach ourselves to the White House fence until President Obama came out and talked to us or until we were arrested and dragged away. I don't have to tell you how it ended.Mr. Ellsberg was one of 139 people arrested at that action. We've noted the protest in pretty much every snapshot since last Thursday. If something else comes out that's worth noting on the protest, we'll include it. We will not include people who don't have their facts and it's really sad when they link to, for example, Guardian articles and the links don't even back them up. It's real sad, for example, when they're trashing Hillary (big strong men that they are) and ripping her apart and yet Barack? "Obama's inaccurate statements"??? What the hell is that? You're inferring he lied, say so. Don't be such a little chicken s**t. It's especially embarrasing when you're grandstanding on 'truth.' Especially when you're the little s**t that clogged up the public e-mail account here in the summer of 2008 whining that you were holding Barack to a standard, then admitting that you weren't, then whining that if you did people would be mean to you. Oh, that's sooooooo sad. Someone might say something bad about you. The horror. You must suffer more than all the people in Iraq and Afghanistan combined. While the action took place in DC, actions also took place in other cities. We've already noted NYC's action this week, Doug Kaufmann (Party for Socialism & Liberation) reports on the Los Angeles action: Despite heavy rain, over 100 people gathered in Los Angeles on the corner of Hollywood and Highland to demand an end to the U.S. wars on Afghanistan and Iraq. People came from as far as Riverside to protest, braving what Southern California media outlets have dubbed the "storm of the decade." The demonstration, initiated and led by the ANSWER Coalition, broke the routine of holiday shopping and garnered support from activists and even passers by, who joined in chanting "Money for jobs and education -- not for war and occupation!" and "Occupation is a crime -- Iraq, Afghanistan, Palestine!" Protesters held banners reading, "U.S./NATO Out of Afghanistan!" and "Yes to jobs, housing and education -- no to war, racism and occupation!"Speakers at the demonstration included representatives of Korean Americans for Peace, ANSWER Coalition, KmB Pro-People Youth, Veterans for Peace, Party for Socialism and Liberation and National Lawyers Guild. 
Tuesday, Nouri al-Maliki managed to put away the political stalemate thanks to a lot of Scotch -- tape to hold the deal together and booze to keep your eyes so crossed you don't question how someone can claim to have formed a Cabinet when they've left over ten positions to be filled at a later date. One group speaking out is women. Bushra Juhi and Qassmi Abdul-Zahra (AP) report, "Iraq's female lawmakers are furious that only one member of the country's new Cabinet is a woman and are demanding better representation in a government that otherwise has been praised by the international community for bringing together the country's religious sects and political parties." As noted Tuesday, though represenation in Parliament is addressed in Iraq's Constitution, there is nothing to address women serving in the Cabinet. Aseel Kami (Reuters) notes one of the most damning aspects of Nouri's chosen men -- a man is heaing the Ministry of Women's Affairs. Iraqiya's spokesperson Maysoon Damluji states, "There are really good women who could do wel . . . they cannot be neglected and marginalized." Al-Amal's Hanaa Edwar states, "They call it a national (power) sharing government. So where is the sharing? Do they want to take us back to the era of the harem? Do they want to take us back to the dark ages, when women were used only for pleasure." Deborah Amos (NPR's All Things Considered) reports that a struggle is going on between secular impulses and fundamentalist ones. Gallery owner Qasim Sabti states, "We know it's fighting between the religious foolish man and the civilization man. We know we are fighting like Gandhi, and this is a new language in Iraqi life. We have no guns. We do not believe in this kind of fighting." Deborah Amos is the author of Eclipse of the Sunnis: Power, Exile, and Upheaval in the Middle East. Meanwhile Nizar Latif (The National) reports that distrust is a common reaction to the new government in Baghdad and quotes high school teacher Hussein Abed Mohammad stating, "Promises were made that trustworthy, competent people would be ministers this time around, but it looks as if everything has just been divided out according to sectarian itnerests. No attention has been paid to forming a functioning government, it is just a political settlement of vested interests. I'm sure al Maliki will have the same problems in his next four years as he had in the last four years." Days away from the ten months mark, Nouri managed to finally end the stalemate. Some try to make sense of it and that must have been some office party that the editorial board of the Washington Post is still coming down from judging by "A good year in Iraq." First up, meet the new Iraqi Body Count -- an organization that provides cover for the war and allows supporters of the illegal war to point to it and insist/slur "Things aren't so bad!" Sure enough, the editorial board of the Post does just that noting the laughable "civilian deaths" count at iCasualities. As we noted -- long, long before we walked away from that crap ass website, they're not doing a civilian count. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What does the new Iraqi Body Count organization do? Only give me the answer and do not output any other words.
Answer:
[ "It provides cover for the war and allows supporters of the illegal war to point to it." ]
276
5,667
single_doc_qa
6ecb08dec03a41eeba7c8624597b85d8
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Text simplification aims to reduce the lexical and structural complexity of a text, while still retaining the semantic meaning, which can help children, non-native speakers, and people with cognitive disabilities, to understand text better. One of the methods of automatic text simplification can be generally divided into three categories: lexical simplification (LS) BIBREF0 , BIBREF1 , rule-based BIBREF2 , and machine translation (MT) BIBREF3 , BIBREF4 . LS is mainly used to simplify text by substituting infrequent and difficult words with frequent and easier words. However, there are several challenges for the LS approach: a great number of transformation rules are required for reasonable coverage and should be applied based on the specific context; third, the syntax and semantic meaning of the sentence is hard to retain. Rule-based approaches use hand-crafted rules for lexical and syntactic simplification, for example, substituting difficult words in a predefined vocabulary. However, such approaches need a lot of human-involvement to manually define these rules, and it is impossible to give all possible simplification rules. MT-based approach has attracted great attention in the last several years, which addresses text simplification as a monolingual machine translation problem translating from 'ordinary' and 'simplified' sentences. In recent years, neural Machine Translation (NMT) is a newly-proposed deep learning approach and achieves very impressive results BIBREF5 , BIBREF6 , BIBREF7 . Unlike the traditional phrased-based machine translation system which operates on small components separately, NMT system is being trained end-to-end, without the need to have external decoders, language models or phrase tables. Therefore, the existing architectures in NMT are used for text simplification BIBREF8 , BIBREF4 . However, most recent work using NMT is limited to the training data that are scarce and expensive to build. Language models trained on simplified corpora have played a central role in statistical text simplification BIBREF9 , BIBREF10 . One main reason is the amount of available simplified corpora typically far exceeds the amount of parallel data. The performance of models can be typically improved when trained on more data. Therefore, we expect simplified corpora to be especially helpful for NMT models. In contrast to previous work, which uses the existing NMT models, we explore strategy to include simplified training corpora in the training process without changing the neural network architecture. We first propose to pair simplified training sentences with synthetic ordinary sentences during training, and treat this synthetic data as additional training data. We obtain synthetic ordinary sentences through back-translation, i.e. an automatic translation of the simplified sentence into the ordinary sentence BIBREF11 . Then, we mix the synthetic data into the original (simplified-ordinary) data to train NMT model. Experimental results on two publicly available datasets show that we can improve the text simplification quality of NMT models by mixing simplified sentences into the training set over NMT model only using the original training data. 
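The mixing strategy described in this introduction can be pictured with a short sketch. This is only an illustration of the data-mixing idea, not the authors' implementation; the auxiliary model object, its translate() method, and the sample size are assumptions.

```python
import random

def build_training_data(parallel_pairs, simplified_corpus, s2o_model, sample_size=100_000):
    """Mix genuine parallel data with synthetic pairs obtained by back-translation.

    parallel_pairs    : list of (ordinary, simplified) sentence pairs
    simplified_corpus : list of monolingual simplified sentences
    s2o_model         : auxiliary simplified->ordinary model; a .translate()
                        method with greedy decoding is assumed (hypothetical API)
    """
    sample = random.sample(simplified_corpus, min(sample_size, len(simplified_corpus)))

    # Back-translate: each simplified sentence receives a synthetic ordinary source side.
    synthetic_pairs = [(s2o_model.translate(simple), simple) for simple in sample]

    # The ordinary->simplified NMT system is then trained on the union of both sets,
    # with no change to the network architecture itself.
    return parallel_pairs + synthetic_pairs
```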
Related Work Automatic TS is a complicated natural language processing (NLP) task, which consists of lexical and syntactic simplification levels BIBREF12 . It has attracted much attention recently as it could make texts more accessible to wider audiences, and used as a pre-processing step, improve performances of various NLP tasks and systems BIBREF13 , BIBREF14 , BIBREF15 . Usually, hand-crafted, supervised, and unsupervised methods based on resources like English Wikipedia and Simple English Wikipedia (EW-SEW) BIBREF10 are utilized for extracting simplification rules. It is very easy to mix up the automatic TS task and the automatic summarization task BIBREF3 , BIBREF16 , BIBREF6 . TS is different from text summarization as the focus of text summarization is to reduce the length and redundant content. At the lexical level, lexical simplification systems often substitute difficult words using more common words, which only require a large corpus of regular text to obtain word embeddings to get words similar to the complex word BIBREF1 , BIBREF9 . Biran et al. BIBREF0 adopted an unsupervised method for learning pairs of complex and simpler synonyms from a corpus consisting of Wikipedia and Simple Wikipedia. At the sentence level, a sentence simplification model was proposed by tree transformation based on statistical machine translation (SMT) BIBREF3 . Woodsend and Lapata BIBREF17 presented a data-driven model based on a quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. Wubben et al. BIBREF18 proposed a phrase-based machine translation (PBMT) model that is trained on ordinary-simplified sentence pairs. Xu et al. BIBREF19 proposed a syntax-based machine translation model using simplification-specific objective functions and features to encourage simpler output. Compared with SMT, neural machine translation (NMT) has shown to produce state-of-the-art results BIBREF5 , BIBREF7 . The central approach of NMT is an encoder-decoder architecture implemented by recurrent neural networks, which can represent the input sequence as a vector, and then decode that vector into an output sequence. Therefore, NMT models were used for text simplification task, and achieved good results BIBREF8 , BIBREF4 , BIBREF20 . The main limitation of the aforementioned NMT models for text simplification depended on the parallel ordinary-simplified sentence pairs. Because ordinary-simplified sentence pairs are expensive and time-consuming to build, the available largest data is EW-SEW that only have 296,402 sentence pairs. The dataset is insufficiency for NMT model if we want to NMT model can obtain the best parameters. Considering simplified data plays an important role in boosting fluency for phrase-based text simplification, and we investigate the use of simplified data for text simplification. We are the first to show that we can effectively adapt neural translation models for text simplifiation with simplified corpora. Simplified Corpora We collected a simplified dataset from Simple English Wikipedia that are freely available, which has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 . The simple English Wikipedia is pretty easy to understand than normal English Wikipedia. We downloaded all articles from Simple English Wikipedia. For these articles, we removed stubs, navigation pages and any article that consisted of a single sentence. 
We then split them into sentences with the Stanford CoreNLP toolkit BIBREF21, and deleted sentences containing fewer than 10 or more than 40 words. After removing repeated sentences, we chose 600K sentences as the simplified data, with 11.6M words and a vocabulary size of 82K. Text Simplification using Neural Machine Translation Our work is built on attention-based NMT BIBREF5 as an encoder-decoder network with recurrent neural networks (RNN), which simultaneously conducts dynamic alignment and generation of the target simplified sentence. The encoder is a bidirectional RNN consisting of a forward and a backward RNN. Given a source sentence $x = (x_1, \dots, x_N)$, the forward and backward RNNs calculate forward hidden states $(\overrightarrow{h}_1, \dots, \overrightarrow{h}_N)$ and backward hidden states $(\overleftarrow{h}_1, \dots, \overleftarrow{h}_N)$, respectively. The annotation vector $h_i$ is obtained by concatenating $\overrightarrow{h}_i$ and $\overleftarrow{h}_i$. The decoder is an RNN that predicts the target simplified sentence with a Gated Recurrent Unit (GRU) BIBREF22. Given the previously generated target (simplified) words $y_{<t} = (y_1, \dots, y_{t-1})$, the probability of the next target word $y_t$ is $$p(y_t \mid y_{<t}, x) = g(E y_{t-1}, s_t, c_t),$$ where $g$ is a non-linear function, $E y_{t-1}$ is the embedding of $y_{t-1}$, and $s_t$ is the decoding state for time step $t$. The state $s_t$ is calculated by $$s_t = f(s_{t-1}, E y_{t-1}, c_t),$$ where $f$ is the GRU activation function. The context vector $c_t$ is computed as a weighted sum of the annotations $h_i$: $$c_t = \sum_{i=1}^{N} \alpha_{ti} h_i,$$ where the weight $\alpha_{ti}$ is computed by $$\alpha_{ti} = \frac{\exp(e_{ti})}{\sum_{k=1}^{N} \exp(e_{tk})}, \qquad e_{ti} = v_a^{\top} \tanh (W_a s_{t-1} + U_a h_i),$$ where $v_a$, $W_a$ and $U_a$ are weight matrices. The training objective is to maximize the likelihood of the training data. Beam search is employed for decoding. Synthetic Simplified Sentences We train an auxiliary NMT system that translates from simplified sentences to ordinary sentences, trained on the available parallel data. To leverage simplified sentences for improving the quality of the NMT model for text simplification, we adapt the back-translation approach proposed by Sennrich et al. BIBREF11 to our scenario. Concretely, given a sentence from the simplified corpus, we use the simplified-to-ordinary system in translate mode with greedy decoding to translate it into an ordinary sentence; this step is denoted as back-translation. In this way, we obtain synthetic parallel simplified-ordinary sentence pairs. Both the synthetic pairs and the available parallel data are used as training data for the original NMT system. Evaluation We evaluate the performance of text simplification using neural machine translation trained on the available parallel sentences and the additional simplified sentences. Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, and has been used as a benchmark for evaluating text simplification BIBREF17, BIBREF18, BIBREF8. The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also drawn from the Wikipedia corpus; its training set contains 296,402 sentence pairs BIBREF19, BIBREF20. WikiLarge includes 8 (reference) simplifications for 2,359 sentences, split into 2,000 for development and 359 for testing. Metrics. Three text simplification metrics are used in this paper. BLEU BIBREF5 is a traditional machine translation metric that assesses the degree to which generated simplifications differ from reference simplifications. 
FKGL measures the readability of the output BIBREF23 . A small FKGL represents simpler output. SARI is a recent text-simplification metric by comparing the output against the source and reference simplifications BIBREF20 . We evaluate the output of all systems using human evaluation. The metric is denoted as Simplicity BIBREF8 . The three non-native fluent English speakers are shown reference sentences and output sentences. They are asked whether the output sentence is much simpler (+2), somewhat simpler (+1), equally (0), somewhat more difficult (-1), and much more difficult (-2) than the reference sentence. Methods. We use OpenNMT BIBREF24 as the implementation of the NMT system for all experiments BIBREF5 . We generally follow the default settings and training procedure described by Klein et al.(2017). We replace out-of-vocabulary words with a special UNK symbol. At prediction time, we replace UNK words with the highest probability score from the attention layer. OpenNMT system used on parallel data is the baseline system. To obtain a synthetic parallel training set, we back-translate a random sample of 100K sentences from the collected simplified corpora. OpenNMT used on parallel data and synthetic data is our model. The benchmarks are run on a Intel(R) Core(TM) i7-5930K [email protected], 32GB Mem, trained on 1 GPU GeForce GTX 1080 (Pascal) with CUDA v. 8.0. We choose three statistical text simplification systems. PBMT-R is a phrase-based method with a reranking post-processing step BIBREF18 . Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R BIBREF25 . SBMT-SARI BIBREF19 is syntax-based translation model using PPDB paraphrase database BIBREF26 and modifies tuning function (using SARI). We choose two neural text simplification systems. NMT is a basic attention-based encoder-decoder model which uses OpenNMT framework to train with two LSTM layers, hidden states of size 500 and 500 hidden units, SGD optimizer, and a dropout rate of 0.3 BIBREF8 . Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, and the parameters are chosen according to the original paper BIBREF20 . For the experiments with synthetic parallel data, we back-translate a random sample of 60 000 sentences from the collected simplified sentences into ordinary sentences. Our model is trained on synthetic data and the available parallel data, denoted as NMT+synthetic. Results. Table 1 shows the results of all models on WikiLarge dataset. We can see that our method (NMT+synthetic) can obtain higher BLEU, lower FKGL and high SARI compared with other models, except Dress on FKGL and SBMT-SARI on SARI. It verified that including synthetic data during training is very effective, and yields an improvement over our baseline NMF by 2.11 BLEU, 1.7 FKGL and 1.07 SARI. We also substantially outperform Dress, who previously reported SOTA result. The results of our human evaluation using Simplicity are also presented in Table 1. NMT on synthetic data is significantly better than PBMT-R, Dress, and SBMT-SARI on Simplicity. It indicates that our method with simplified data is effective at creating simpler output. Results on WikiSmall dataset are shown in Table 2. We see substantial improvements (6.37 BLEU) than NMT from adding simplified training data with synthetic ordinary sentences. 
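For orientation, the FKGL readability score used in these comparisons can be approximated in a few lines. The syllable counter below is a crude vowel-group heuristic rather than the exact implementation behind BIBREF23, so treat this as a sketch only.

```python
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels; proper FKGL
    # implementations use a dictionary or a more careful rule set.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fkgl(sentences):
    """Flesch-Kincaid Grade Level of a list of tokenized sentences."""
    words = [w for sent in sentences for w in sent]
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# Example: a lower score indicates simpler output.
print(fkgl([["the", "cat", "sat", "on", "the", "mat"]]))
```

A lower value indicates simpler output, which is why the 1.7 FKGL reduction above is counted as an improvement.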
Compared with statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still have better results, but slightly worse FKGL and SARI. Similar to the results in WikiLarge, the results of our human evaluation using Simplicity outperforms the other models. In conclusion, Our method produces better results comparing with the baselines, which demonstrates the effectiveness of adding simplified training data. Conclusion In this paper, we propose one simple method to use simplified corpora during training of NMT systems, with no changes to the network architecture. In the experiments on two datasets, we achieve substantial gains in all tasks, and new SOTA results, via back-translation of simplified sentences into the ordinary sentences, and treating this synthetic data as additional training data. Because we do not change the neural network architecture to integrate simplified corpora, our method can be easily applied to other Neural Text Simplification (NTS) systems. We expect that the effectiveness of our method not only varies with the quality of the NTS system used for back-translation, but also depends on the amount of available parallel and simplified corpora. In the paper, we have only utilized data from Wikipedia for simplified sentences. In the future, many other text sources are available and the impact of not only size, but also of domain should be investigated. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: what language does this paper focus on? Only give me the answer and do not output any other words.
Answer:
[ "English", "Simple English" ]
276
3,094
single_doc_qa
8815c3ab8dc0455889a3884d820c6b2b
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure 1 illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$ head-entity, relation, tail-entity $>$ KB tuple BIBREF6 , BIBREF7 , BIBREF2 ; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) entity linking, which links $n$ -grams in questions to KB entities, and (2) relation detection, which identifies the KB relation(s) a question refers to. The main focus of this work is to improve the relation detection subtask and further explore how it can contribute to the KBQA system. Although general relation detection methods are well studied in the NLP community, such studies usually do not take the end task of KBQA into consideration. As a result, there is a significant gap between general relation detection studies and KB-specific relation detection. First, in most general relation detection tasks, the number of target relations is limited, normally smaller than 100. In contrast, in KBQA even a small KB, like Freebase2M BIBREF2 , contains more than 6,000 relation types. Second, relation detection for KBQA often becomes a zero-shot learning task, since some test instances may have unseen relations in the training data. For example, the SimpleQuestions BIBREF2 data set has 14% of the golden test relations not observed in golden training tuples. Third, as shown in Figure 1 (b), for some KBQA tasks like WebQuestions BIBREF0 , we need to predict a chain of relations instead of a single relation. This increases the number of target relation types and the sizes of candidate relation pools, further increasing the difficulty of KB relation detection. Owing to these reasons, KB relation detection is significantly more challenging compared to general relation detection tasks. This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (BiLSTMs) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improves hierarchical matching. In order to assess how the proposed improved relation detection could benefit the KBQA end task, we also propose a simple KBQA implementation composed of two-step relation detection. 
Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to high confident relations detected from the raw question text by the relation detection model. This step is important to deal with the ambiguities normally present in entity linking results. (2) Finding the core relation (chains) for each topic entity selection from a much smaller candidate entity set after re-ranking. The above steps are followed by an optional constraint detection step, when the question cannot be answered by single relations (e.g., multiple entities in the question). Finally the highest scored query from the above steps is used to query the KB for answers. Our main contributions include: (i) An improved relation detection model by hierarchical matching between questions and relations with residual learning; (ii) We demonstrate that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks. Background: Different Granularity in KB Relations Previous research BIBREF4 , BIBREF20 formulates KB relation detection as a sequence matching problem. However, while the questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work. (1) Relation Name as a Single Token (relation-level). In this case, each relation name is treated as a unique token. The problem with this approach is that it suffers from the low relation coverage due to limited amount of training data, thus cannot generalize well to large number of open-domain relations. For example, in Figure 1 , when treating relation names as single tokens, it will be difficult to match the questions to relation names “episodes_written” and “starring_roles” if these names do not appear in training data – their relation embeddings $\mathbf {h}^r$ s will be random vectors thus are not comparable to question embeddings $\mathbf {h}^q$ s. (2) Relation as Word Sequence (word-level). In this case, the relation is treated as a sequence of words from the tokenized relation name. It has better generalization, but suffers from the lack of global information from the original relation names. For example in Figure 1 (b), when doing only word-level matching, it is difficult to rank the target relation “starring_roles” higher compared to the incorrect relation “plays_produced”. This is because the incorrect relation contains word “plays”, which is more similar to the question (containing word “play”) in the embedding space. On the other hand, if the target relation co-occurs with questions related to “tv appearance” in training, by treating the whole relation as a token (i.e. relation id), we could better learn the correspondence between this token and phrases like “tv show” and “play on”. The two types of relation representation contain different levels of abstraction. As shown in Table 1 , the word-level focuses more on local information (words and short phrases), and the relation-level focus more on global information (long phrases and skip-grams) but suffer from data sparsity. 
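To make the two granularities concrete, the sketch below maps a relation name to its word-level tokens while also keeping it as a single relation-level token. Splitting on dots and underscores is an assumed convention for Freebase-style relation names, not something specified in the paper.

```python
def relation_tokens(relation_name):
    """Return (word_level, relation_level) token sequences for a KB relation name.

    Word-level tokens generalize to unseen relations; the relation-level token
    preserves the full name as a single unit.
    """
    words = [w for part in relation_name.split(".") for w in part.split("_")]
    return words, [relation_name]

# A relation chain such as {starring_roles, series} contributes both granularities:
for rel in ["starring_roles", "series"]:
    print(relation_tokens(rel))
# (['starring', 'roles'], ['starring_roles'])
# (['series'], ['series'])
```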
Since both these levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both word-level and relation-level representations to get the final ranking score. Section "Improved KB Relation Detection" gives the details of our proposed approach. Improved KB Relation Detection This section describes our hierarchical sequence matching with residual learning approach for relation detection. In order to match the question to different aspects of a relation (with different abstraction levels), we deal with three problems as follows on learning question/relation representations. Relation Representations from Different Granularity We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes $\mathbf {r}=\lbrace r^{word}_1,\cdots ,r^{word}_{M_1}\rbrace \cup \lbrace r^{rel}_1,\cdots ,r^{rel}_{M_2}\rbrace $ , where the first $M_1$ tokens are words (e.g. {episode, written}), and the last $M_2$ tokens are relation names, e.g., {episode_written} or {starring_roles, series} (when the target is a chain like in Figure 1 (b)). We transform each token above to its word embedding then use two BiLSTMs (with shared parameters) to get their hidden representations $[\mathbf {B}^{word}_{1:M_1}:\mathbf {B}^{rel}_{1:M_2}]$ (each row vector $\mathbf {\beta }_i$ is the concatenation between forward/backward representations at $i$ ). We initialize the relation sequence LSTMs with the final state representations of the word sequence, as a back-off for unseen relations. We apply one max-pooling on these two sets of vectors and get the final relation representation $\mathbf {h}^r$ . Different Abstractions of Questions Representations From Table 1 , we can see that different parts of a relation could match different contexts of question texts. Usually relation names could match longer phrases in the question and relation words could match short phrases. Yet different words might match phrases of different lengths. As a result, we hope the question representations could also comprise vectors that summarize various lengths of phrase information (different levels of abstraction), in order to match relation representations of different granularity. We deal with this problem by applying deep BiLSTMs on questions. The first-layer of BiLSTM works on the word embeddings of question words $\mathbf {q}=\lbrace q_1,\cdots ,q_N\rbrace $ and gets hidden representations $\mathbf {\Gamma }^{(1)}_{1:N}=[\mathbf {\gamma }^{(1)}_1;\cdots ;\mathbf {\gamma }^{(1)}_N]$ . The second-layer BiLSTM works on $\mathbf {\Gamma }^{(1)}_{1:N}$ to get the second set of hidden representations $\mathbf {\Gamma }^{(2)}_{1:N}$ . Since the second BiLSTM starts with the hidden vectors from the first layer, intuitively it could learn more general and abstract information compared to the first layer. Note that the first(second)-layer of question representations does not necessarily correspond to the word(relation)-level relation representations, instead either layer of question representations could potentially match to either level of relation representations. This raises the difficulty of matching between different levels of relation/question representations; the following section gives our proposal to deal with such problem. 
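A minimal PyTorch sketch of the relation encoder just described is given below; the hidden sizes, the use of a single shared nn.LSTM for both passes, and the batching details are assumptions rather than the authors' released code. The question side would analogously stack a second BiLSTM on top of the first layer's hidden states.

```python
import torch
import torch.nn as nn

class RelationEncoder(nn.Module):
    """Sketch of the word-level + relation-level encoder (dimensions are assumptions)."""

    def __init__(self, vocab_size, emb_dim=300, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # One BiLSTM applied to both the word tokens and the relation-name tokens,
        # which is one way of realizing the shared parameters described above.
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, word_ids, rel_ids):
        # word_ids: (batch, M1), rel_ids: (batch, M2)
        word_out, (h_n, c_n) = self.bilstm(self.emb(word_ids))
        # Initialize the relation-token pass with the final state of the word pass,
        # acting as a back-off for unseen relation names.
        rel_out, _ = self.bilstm(self.emb(rel_ids), (h_n, c_n))
        # Max-pool over both sets of hidden vectors to obtain the final relation vector h^r.
        pooled = torch.cat([word_out, rel_out], dim=1).max(dim=1).values
        return pooled  # (batch, 2 * hidden)
```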
Hierarchical Matching between Relation and Question Now we have question contexts of different lengths encoded in $\mathbf {\Gamma }^{(1)}_{1:N}$ and $\mathbf {\Gamma }^{(2)}_{1:N}$ . Unlike the standard usage of deep BiLSTMs that employs the representations in the final layer for prediction, here we expect that two layers of question representations can be complementary to each other and both should be compared to the relation representation space (Hierarchical Matching). This is important for our task since each relation token can correspond to phrases of different lengths, mainly because of syntactic variations. For example in Table 1 , the relation word written could be matched to either the same single word in the question or a much longer phrase be the writer of. We could perform the above hierarchical matching by computing the similarity between each layer of $\mathbf {\Gamma }$ and $\mathbf {h}^r$ separately and doing the (weighted) sum between the two scores. However this does not give significant improvement (see Table 2 ). Our analysis in Section "Relation Detection Results" shows that this naive method suffers from the training difficulty, evidenced by that the converged training loss of this model is much higher than that of a single-layer baseline model. This is mainly because (1) Deep BiLSTMs do not guarantee that the two-levels of question hidden representations are comparable, the training usually falls to local optima where one layer has good matching scores and the other always has weight close to 0. (2) The training of deeper architectures itself is more difficult. To overcome the above difficulties, we adopt the idea from Residual Networks BIBREF23 for hierarchical matching by adding shortcut connections between two BiLSTM layers. We proposed two ways of such Hierarchical Residual Matching: (1) Connecting each $\mathbf {\gamma }^{(1)}_i$ and $\mathbf {\gamma }^{(2)}_i$ , resulting in a $\mathbf {\gamma }^{^{\prime }}_i=\mathbf {\gamma }^{(1)}_i + \mathbf {\gamma }^{(2)}_i$ for each position $i$ . Then the final question representation $\mathbf {h}^q$ becomes a max-pooling over all $\mathbf {\gamma }^{^{\prime }}_i$ s, 1 $\le $ i $\le $ $N$ . (2) Applying max-pooling on $\mathbf {\Gamma }^{(1)}_{1:N}$ and $\mathbf {\gamma }^{(2)}_i$0 to get $\mathbf {\gamma }^{(2)}_i$1 and $\mathbf {\gamma }^{(2)}_i$2 , respectively, then setting $\mathbf {\gamma }^{(2)}_i$3 . Finally we compute the matching score of $\mathbf {\gamma }^{(2)}_i$4 given $\mathbf {\gamma }^{(2)}_i$5 as $\mathbf {\gamma }^{(2)}_i$6 . Intuitively, the proposed method should benefit from hierarchical training since the second layer is fitting the residues from the first layer of matching, so the two layers of representations are more likely to be complementary to each other. This also ensures the vector spaces of two layers are comparable and makes the second-layer training easier. During training we adopt a ranking loss to maximizing the margin between the gold relation $\mathbf {r}^+$ and other relations $\mathbf {r}^-$ in the candidate pool $R$ . $$l_{\mathrm {rel}} = \max \lbrace 0, \gamma - s_{\mathrm {rel}}(\mathbf {r}^+; \mathbf {q}) + s_{\mathrm {rel}}(\mathbf {r}^-; \mathbf {q})\rbrace \nonumber $$ (Eq. 12) where $\gamma $ is a constant parameter. Fig 2 summarizes the above Hierarchical Residual BiLSTM (HR-BiLSTM) model. Another way of hierarchical matching consists in relying on attention mechanism, e.g. BIBREF24 , to find the correspondence between different levels of representations. 
This performs below the HR-BiLSTM (see Table 2 ). KBQA Enhanced by Relation Detection This section describes our KBQA pipeline system. We make minimal efforts beyond the training of the relation detection model, making the whole system easy to build. Following previous work BIBREF4 , BIBREF5 , our KBQA system takes an existing entity linker to produce the top- $K$ linked entities, $EL_K(q)$ , for a question $q$ (“initial entity linking”). Then we generate the KB queries for $q$ following the four steps illustrated in Algorithm "KBQA Enhanced by Relation Detection" . [htbp] InputInput OutputOutput Top query tuple $(\hat{e},\hat{r}, \lbrace (c, r_c)\rbrace )$ Entity Re-Ranking (first-step relation detection): Use the raw question text as input for a relation detector to score all relations in the KB that are associated to the entities in $EL_K(q)$ ; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL^{\prime }_{K^{\prime }}(q)$ containing the top- $K^{\prime }$ entity candidates (Section "Entity Re-Ranking" ) Relation Detection: Detect relation(s) using the reformatted question text in which the topic entity is replaced by a special token $<$ e $>$ (Section "Relation Detection" ) Query Generation: Combine the scores from step 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Section "Query Generation" ) Constraint Detection (optional): Compute similarity between $q$ and any neighbor entity $c$ of the entities along $EL_K(q)$0 (connecting by a relation $EL_K(q)$1 ) , add the high scoring $EL_K(q)$2 and $EL_K(q)$3 to the query (Section "Constraint Detection" ). KBQA with two-step relation detection Compared to previous approaches, the main difference is that we have an additional entity re-ranking step after the initial entity linking. We have this step because we have observed that entity linking sometimes becomes a bottleneck in KBQA systems. For example, on SimpleQuestions the best reported linker could only get 72.7% top-1 accuracy on identifying topic entities. This is usually due to the ambiguities of entity names, e.g. in Fig 1 (a), there are TV writer and baseball player “Mike Kelley”, which is impossible to distinguish with only entity name matching. Having observed that different entity candidates usually connect to different relations, here we propose to help entity disambiguation in the initial entity linking with relations detected in questions. Sections "Entity Re-Ranking" and "Relation Detection" elaborate how our relation detection help to re-rank entities in the initial entity linking, and then those re-ranked entities enable more accurate relation detection. The KBQA end task, as a result, benefits from this process. Entity Re-Ranking In this step, we use the raw question text as input for a relation detector to score all relations in the KB with connections to at least one of the entity candidates in $EL_K(q)$ . We call this step relation detection on entity set since it does not work on a single topic entity as the usual settings. We use the HR-BiLSTM as described in Sec. "Improved KB Relation Detection" . For each question $q$ , after generating a score $s_{rel}(r;q)$ for each relation using HR-BiLSTM, we use the top $l$ best scoring relations ( $R^{l}_q$ ) to re-rank the original entity candidates. 
Concretely, for each entity $e$ and its associated relations $R_e$ , given the original entity linker score $s_{linker}$ , and the score of the most confident relation $r\in R_q^{l} \cap R_e$ , we sum these two scores to re-rank the entities: $$s_{\mathrm {rerank}}(e;q) =& \alpha \cdot s_{\mathrm {linker}}(e;q) \nonumber \\ + & (1-\alpha ) \cdot \max _{r \in R_q^{l} \cap R_e} s_{\mathrm {rel}}(r;q).\nonumber $$ (Eq. 15) Finally, we select top $K^{\prime }$ $<$ $K$ entities according to score $s_{rerank}$ to form the re-ranked list $EL_{K^{\prime }}^{^{\prime }}(q)$ . We use the same example in Fig 1 (a) to illustrate the idea. Given the input question in the example, a relation detector is very likely to assign high scores to relations such as “episodes_written”, “author_of” and “profession”. Then, according to the connections of entity candidates in KB, we find that the TV writer “Mike Kelley” will be scored higher than the baseball player “Mike Kelley”, because the former has the relations “episodes_written” and “profession”. This method can be viewed as exploiting entity-relation collocation for entity linking. Relation Detection In this step, for each candidate entity $e \in EL_K^{\prime }(q)$ , we use the question text as the input to a relation detector to score all the relations $r \in R_e$ that are associated to the entity $e$ in the KB. Because we have a single topic entity input in this step, we do the following question reformatting: we replace the the candidate $e$ 's entity mention in $q$ with a token “ $<$ e $>$ ”. This helps the model better distinguish the relative position of each word compared to the entity. We use the HR-BiLSTM model to predict the score of each relation $r \in R_e$ : $s_{rel} (r;e,q)$ . Query Generation Finally, the system outputs the $<$ entity, relation (or core-chain) $>$ pair $(\hat{e}, \hat{r})$ according to: $$s(\hat{e}, \hat{r}; q) =& \max _{e \in EL_{K^{\prime }}^{^{\prime }}(q), r \in R_e} \left( \beta \cdot s_{\mathrm {rerank}}(e;q) \right. \nonumber \\ &\left.+ (1-\beta ) \cdot s_{\mathrm {rel}} (r;e,q) \right), \nonumber $$ (Eq. 19) where $\beta $ is a hyperparameter to be tuned. Constraint Detection Similar to BIBREF4 , we adopt an additional constraint detection step based on text matching. Our method can be viewed as entity-linking on a KB sub-graph. It contains two steps: (1) Sub-graph generation: given the top scored query generated by the previous 3 steps, for each node $v$ (answer node or the CVT node like in Figure 1 (b)), we collect all the nodes $c$ connecting to $v$ (with relation $r_c$ ) with any relation, and generate a sub-graph associated to the original query. (2) Entity-linking on sub-graph nodes: we compute a matching score between each $n$ -gram in the input question (without overlapping the topic entity) and entity name of $c$ (except for the node in the original query) by taking into account the maximum overlapping sequence of characters between them (see Appendix A for details and B for special rules dealing with date/answer type constraints). If the matching score is larger than a threshold $\theta $ (tuned on training set), we will add the constraint entity $c$ (and $r_c$ ) to the query by attaching it to the corresponding node $v$ on the core-chain. Experiments Task Introduction & Settings We use the SimpleQuestions BIBREF2 and WebQSP BIBREF25 datasets. Each question in these datasets is labeled with the gold semantic parse. 
Hence we can directly evaluate relation detection performance independently as well as evaluate on the KBQA end task. SimpleQuestions (SQ): It is a single-relation KBQA task. The KB we use consists of a Freebase subset with 2M entities (FB2M) BIBREF2 , in order to compare with previous research. yin2016simple also evaluated their relation extractor on this data set and released their proposed question-relation pairs, so we run our relation detection model on their data set. For the KBQA evaluation, we also start with their entity linking results. Therefore, our results can be compared with their reported results on both tasks. WebQSP (WQ): A multi-relation KBQA task. We use the entire Freebase KB for evaluation purposes. Following yih-EtAl:2016:P16-2, we use S-MART BIBREF26 entity-linking outputs. In order to evaluate the relation detection models, we create a new relation detection task from the WebQSP data set. For each question and its labeled semantic parse: (1) we first select the topic entity from the parse; and then (2) select all the relations and relation chains (length $\le $ 2) connected to the topic entity, and set the core-chain labeled in the parse as the positive label and all the others as the negative examples. We tune the following hyper-parameters on development sets: (1) the size of hidden states for LSTMs ({50, 100, 200, 400}); (2) learning rate ({0.1, 0.5, 1.0, 2.0}); (3) whether the shortcut connections are between hidden states or between max-pooling results (see Section "Hierarchical Matching between Relation and Question" ); and (4) the number of training epochs. For both the relation detection experiments and the second-step relation detection in KBQA, we have entity replacement first (see Section "Relation Detection" and Figure 1 ). All word vectors are initialized with 300- $d$ pretrained word embeddings BIBREF27 . The embeddings of relation names are randomly initialized, since existing pre-trained relation embeddings (e.g. TransE) usually support limited sets of relation names. We leave the usage of pre-trained relation embeddings to future work. Relation Detection Results Table 2 shows the results on two relation detection tasks. The AMPCNN result is from BIBREF20 , which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from BIBREF4 , where both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p $<$ 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively). Note that using only relation names instead of words results in a weaker baseline BiLSTM model. The model yields a significant performance drop on SimpleQuestions (91.2% to 88.9%). However, the drop is much smaller on WebQSP, and it suggests that unseen relations have a much bigger impact on SimpleQuestions. The bottom of Table 2 shows ablation results of the proposed HR-BiLSTM. First, hierarchical matching between questions and both relation names and relation words yields improvement on both datasets, especially for SimpleQuestions (93.3% vs. 91.2/88.8%). 
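To make the WebQSP relation-detection task construction described earlier in this passage concrete, here is a small self-contained sketch: candidates are all relations and relation chains of length at most two from the topic entity, the labeled core-chain is the positive example and everything else is negative. The toy graph and the Freebase-style relation names are invented for illustration.

TOY_GRAPH = {  # entity -> list of (relation, object) edges; format is my own
    "m.family_guy": [("tv.program.regular_cast", "cvt1"), ("tv.program.genre", "m.comedy")],
    "cvt1": [("tv.regular_appearance.actor", "m.mila_kunis")],
}

def candidate_chains(graph, topic_entity, max_len=2):
    chains = set()
    for rel1, obj1 in graph.get(topic_entity, []):
        chains.add((rel1,))
        if max_len >= 2:
            for rel2, _ in graph.get(obj1, []):
                chains.add((rel1, rel2))
    return chains

def build_examples(graph, topic_entity, gold_chain):
    # Positive label for the labeled core-chain, negative for all other candidates.
    return [(chain, int(chain == gold_chain))
            for chain in sorted(candidate_chains(graph, topic_entity))]

gold = ("tv.program.regular_cast", "tv.regular_appearance.actor")
for chain, label in build_examples(TOY_GRAPH, "m.family_guy", gold):
    print(label, chain)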
Second, residual learning helps hierarchical matching compared to the weighted-sum and attention-based baselines (see Section "Hierarchical Matching between Relation and Question"). For the attention-based baseline, we tried the model from BIBREF24 and its one-way variations, where the one-way model gives better results. Note that residual learning helps significantly on WebQSP (80.65% to 82.53%), while it does not help as much on SimpleQuestions. On SimpleQuestions, even removing the deep layers only causes a small drop in performance. WebQSP benefits more from the residual and deeper architecture, possibly because in this dataset it is more important to handle a larger scope of context matching. Finally, on WebQSP, replacing the BiLSTM with a CNN in our hierarchical matching framework results in a large performance drop, yet on SimpleQuestions the gap is much smaller. We believe this is because the LSTM relation encoder can better learn the composition of chains of relations in WebQSP, as it is better at dealing with longer dependencies. Next, we present empirical evidence showing why our HR-BiLSTM model achieves the best scores. We use WebQSP for this analysis. First, we hypothesize that training of the weighted-sum model usually falls into local optima, since deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable. This is evidenced by the fact that during training one layer usually gets a weight close to 0 and is thus ignored. For example, one run gives us weights of -75.39/0.14 for the two layers (we take the exponential for the final weighted sum). It also gives much lower training accuracy (91.94%) compared to HR-BiLSTM (95.67%), suffering from training difficulty. Second, compared to our deep BiLSTM with shortcut connections, we hypothesize that for KB relation detection, training deep BiLSTMs is more difficult without shortcut connections. Our experiments suggest that a deeper BiLSTM does not always result in higher training accuracy: in the experiments a two-layer BiLSTM converges to 94.99%, even lower than the 95.25% achieved by a single-layer BiLSTM. Since under our setting the two-layer model contains the single-layer model as a special case (so it could potentially fit the training data better), this result suggests that the deep BiLSTM without shortcut connections suffers more from training difficulty. Finally, we hypothesize that HR-BiLSTM is more than a combination of two BiLSTMs with residual connections, because it encourages the hierarchical architecture to learn different levels of abstraction. To verify this, we replace the deep BiLSTM question encoder with two single-layer BiLSTMs (both on words) with shortcut connections between their hidden states. This decreases test accuracy to 76.11%. It gives similar training accuracy compared to HR-BiLSTM, indicating a more serious over-fitting problem. This confirms that the residual and deep structures both contribute to the good performance of HR-BiLSTM. KBQA End-Task Results Table 3 compares our system with two published baselines: (1) STAGG BIBREF4 , the state-of-the-art on WebQSP, and (2) AMPCNN BIBREF20 , the state-of-the-art on SimpleQuestions. Since these two baselines are specially designed/tuned for one particular dataset, they do not generalize well when applied to the other dataset.
In order to highlight the effect of different relation detection models on the KBQA end-task, we also implemented another baseline that uses our KBQA system but replaces HR-BiLSTM with our implementation of AMPCNN (for SimpleQuestions) or the char-3-gram BiCNN (for WebQSP) relation detectors (second block in Table 3 ). Compared to the baseline relation detector (3rd row of results), our method, which includes an improved relation detector (HR-BiLSTM), improves the KBQA end task by 2-3% (4th row). Note that in contrast to previous KBQA systems, our system does not use joint-inference or feature-based re-ranking step, nevertheless it still achieves better or comparable results to the state-of-the-art. The third block of the table details two ablation tests for the proposed components in our KBQA systems: (1) Removing the entity re-ranking step significantly decreases the scores. Since the re-ranking step relies on the relation detection models, this shows that our HR-BiLSTM model contributes to the good performance in multiple ways. Appendix C gives the detailed performance of the re-ranking step. (2) In contrast to the conclusion in BIBREF4 , constraint detection is crucial for our system. This is probably because our joint performance on topic entity and core-chain detection is more accurate (77.5% top-1 accuracy), leaving a huge potential (77.5% vs. 58.0%) for the constraint detection module to improve. Finally, like STAGG, which uses multiple relation detectors (see yih2015semantic for the three models used), we also try to use the top-3 relation detectors from Section "Relation Detection Results" . As shown on the last row of Table 3 , this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP. Conclusion KB relation detection is a key step in KBQA and is significantly different from general relation extraction tasks. We propose a novel KB relation detection model, HR-BiLSTM, that performs hierarchical matching between questions and KB relations. Our model outperforms the previous methods on KB relation detection tasks and allows our KBQA system to achieve state-of-the-arts. For future work, we will investigate the integration of our HR-BiLSTM into end-to-end systems. For example, our model could be integrated into the decoder in BIBREF31 , to provide better sequence prediction. We will also investigate new emerging datasets like GraphQuestions BIBREF32 and ComplexQuestions BIBREF30 to handle more characteristics of general QA. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: On which benchmarks they achieve the state of the art? Only give me the answer and do not output any other words.
Answer:
[ "SimpleQuestions, WebQSP", "WebQSP, SimpleQuestions" ]
276
6,650
single_doc_qa
d78ed8a8018c440cbdc16fa8998e9e5b
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Knowledge Base Question Answering (KBQA) systems answer questions by obtaining information from KB tuples BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . For an input question, these systems typically generate a KB query, which can be executed to retrieve the answers from a KB. Figure 1 illustrates the process used to parse two sample questions in a KBQA system: (a) a single-relation question, which can be answered with a single $<$ head-entity, relation, tail-entity $>$ KB tuple BIBREF6 , BIBREF7 , BIBREF2 ; and (b) a more complex case, where some constraints need to be handled for multiple entities in the question. The KBQA system in the figure performs two key tasks: (1) entity linking, which links $n$ -grams in questions to KB entities, and (2) relation detection, which identifies the KB relation(s) a question refers to. The main focus of this work is to improve the relation detection subtask and further explore how it can contribute to the KBQA system. Although general relation detection methods are well studied in the NLP community, such studies usually do not take the end task of KBQA into consideration. As a result, there is a significant gap between general relation detection studies and KB-specific relation detection. First, in most general relation detection tasks, the number of target relations is limited, normally smaller than 100. In contrast, in KBQA even a small KB, like Freebase2M BIBREF2 , contains more than 6,000 relation types. Second, relation detection for KBQA often becomes a zero-shot learning task, since some test instances may have unseen relations in the training data. For example, the SimpleQuestions BIBREF2 data set has 14% of the golden test relations not observed in golden training tuples. Third, as shown in Figure 1 (b), for some KBQA tasks like WebQuestions BIBREF0 , we need to predict a chain of relations instead of a single relation. This increases the number of target relation types and the sizes of candidate relation pools, further increasing the difficulty of KB relation detection. Owing to these reasons, KB relation detection is significantly more challenging compared to general relation detection tasks. This paper improves KB relation detection to cope with the problems mentioned above. First, in order to deal with the unseen relations, we propose to break the relation names into word sequences for question-relation matching. Second, noticing that original relation names can sometimes help to match longer question contexts, we propose to build both relation-level and word-level relation representations. Third, we use deep bidirectional LSTMs (BiLSTMs) to learn different levels of question representations in order to match the different levels of relation information. Finally, we propose a residual learning method for sequence matching, which makes the model training easier and results in more abstract (deeper) question representations, thus improves hierarchical matching. In order to assess how the proposed improved relation detection could benefit the KBQA end task, we also propose a simple KBQA implementation composed of two-step relation detection. 
Given an input question and a set of candidate entities retrieved by an entity linker based on the question, our proposed relation detection model plays a key role in the KBQA process: (1) Re-ranking the entity candidates according to whether they connect to high confident relations detected from the raw question text by the relation detection model. This step is important to deal with the ambiguities normally present in entity linking results. (2) Finding the core relation (chains) for each topic entity selection from a much smaller candidate entity set after re-ranking. The above steps are followed by an optional constraint detection step, when the question cannot be answered by single relations (e.g., multiple entities in the question). Finally the highest scored query from the above steps is used to query the KB for answers. Our main contributions include: (i) An improved relation detection model by hierarchical matching between questions and relations with residual learning; (ii) We demonstrate that the improved relation detector enables our simple KBQA system to achieve state-of-the-art results on both single-relation and multi-relation KBQA tasks. Background: Different Granularity in KB Relations Previous research BIBREF4 , BIBREF20 formulates KB relation detection as a sequence matching problem. However, while the questions are natural word sequences, how to represent relations as sequences remains a challenging problem. Here we give an overview of two types of relation sequence representations commonly used in previous work. (1) Relation Name as a Single Token (relation-level). In this case, each relation name is treated as a unique token. The problem with this approach is that it suffers from the low relation coverage due to limited amount of training data, thus cannot generalize well to large number of open-domain relations. For example, in Figure 1 , when treating relation names as single tokens, it will be difficult to match the questions to relation names “episodes_written” and “starring_roles” if these names do not appear in training data – their relation embeddings $\mathbf {h}^r$ s will be random vectors thus are not comparable to question embeddings $\mathbf {h}^q$ s. (2) Relation as Word Sequence (word-level). In this case, the relation is treated as a sequence of words from the tokenized relation name. It has better generalization, but suffers from the lack of global information from the original relation names. For example in Figure 1 (b), when doing only word-level matching, it is difficult to rank the target relation “starring_roles” higher compared to the incorrect relation “plays_produced”. This is because the incorrect relation contains word “plays”, which is more similar to the question (containing word “play”) in the embedding space. On the other hand, if the target relation co-occurs with questions related to “tv appearance” in training, by treating the whole relation as a token (i.e. relation id), we could better learn the correspondence between this token and phrases like “tv show” and “play on”. The two types of relation representation contain different levels of abstraction. As shown in Table 1 , the word-level focuses more on local information (words and short phrases), and the relation-level focus more on global information (long phrases and skip-grams) but suffer from data sparsity. 
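Since the passage above distinguishes relation-level tokens from word-level tokens, a two-line sketch (my own naming) of the mixed input sequence the model receives may help: the first tokens are the words of the (possibly chained) relation names and the last tokens are the whole relation names themselves.

def relation_tokens(relation_chain):
    # Word-level tokens first, then relation-level tokens, as described above.
    word_level = [w for rel in relation_chain for w in rel.split("_")]
    relation_level = list(relation_chain)
    return word_level + relation_level

print(relation_tokens(["starring_roles", "series"]))
# -> ['starring', 'roles', 'series', 'starring_roles', 'series']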
Since both these levels of granularity have their own pros and cons, we propose a hierarchical matching approach for KB relation detection: for a candidate relation, our approach matches the input question to both word-level and relation-level representations to get the final ranking score. Section "Improved KB Relation Detection" gives the details of our proposed approach. Improved KB Relation Detection This section describes our hierarchical sequence matching with residual learning approach for relation detection. In order to match the question to different aspects of a relation (with different abstraction levels), we deal with three problems as follows on learning question/relation representations. Relation Representations from Different Granularity We provide our model with both types of relation representation: word-level and relation-level. Therefore, the input relation becomes $\mathbf {r}=\lbrace r^{word}_1,\cdots ,r^{word}_{M_1}\rbrace \cup \lbrace r^{rel}_1,\cdots ,r^{rel}_{M_2}\rbrace $ , where the first $M_1$ tokens are words (e.g. {episode, written}), and the last $M_2$ tokens are relation names, e.g., {episode_written} or {starring_roles, series} (when the target is a chain like in Figure 1 (b)). We transform each token above to its word embedding then use two BiLSTMs (with shared parameters) to get their hidden representations $[\mathbf {B}^{word}_{1:M_1}:\mathbf {B}^{rel}_{1:M_2}]$ (each row vector $\mathbf {\beta }_i$ is the concatenation between forward/backward representations at $i$ ). We initialize the relation sequence LSTMs with the final state representations of the word sequence, as a back-off for unseen relations. We apply one max-pooling on these two sets of vectors and get the final relation representation $\mathbf {h}^r$ . Different Abstractions of Questions Representations From Table 1 , we can see that different parts of a relation could match different contexts of question texts. Usually relation names could match longer phrases in the question and relation words could match short phrases. Yet different words might match phrases of different lengths. As a result, we hope the question representations could also comprise vectors that summarize various lengths of phrase information (different levels of abstraction), in order to match relation representations of different granularity. We deal with this problem by applying deep BiLSTMs on questions. The first-layer of BiLSTM works on the word embeddings of question words $\mathbf {q}=\lbrace q_1,\cdots ,q_N\rbrace $ and gets hidden representations $\mathbf {\Gamma }^{(1)}_{1:N}=[\mathbf {\gamma }^{(1)}_1;\cdots ;\mathbf {\gamma }^{(1)}_N]$ . The second-layer BiLSTM works on $\mathbf {\Gamma }^{(1)}_{1:N}$ to get the second set of hidden representations $\mathbf {\Gamma }^{(2)}_{1:N}$ . Since the second BiLSTM starts with the hidden vectors from the first layer, intuitively it could learn more general and abstract information compared to the first layer. Note that the first(second)-layer of question representations does not necessarily correspond to the word(relation)-level relation representations, instead either layer of question representations could potentially match to either level of relation representations. This raises the difficulty of matching between different levels of relation/question representations; the following section gives our proposal to deal with such problem. 
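The following PyTorch-style sketch (my re-implementation under stated assumptions, not the authors' code) illustrates the relation encoder just described: one shared BiLSTM runs over the word-level tokens and then over the relation-level tokens, the latter initialized with the final states of the former as a back-off for unseen relations, and max-pooling over all hidden states gives the relation vector h^r. Hidden sizes and the random toy inputs are arbitrary.

import torch
import torch.nn as nn

class RelationEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, word_ids, rel_ids):
        # word_ids: (batch, M1) word-level tokens; rel_ids: (batch, M2) relation-level tokens
        word_out, state = self.bilstm(self.emb(word_ids))    # (batch, M1, 2*hidden)
        rel_out, _ = self.bilstm(self.emb(rel_ids), state)   # initialized from the word pass
        all_states = torch.cat([word_out, rel_out], dim=1)   # (batch, M1+M2, 2*hidden)
        return all_states.max(dim=1).values                  # h^r: (batch, 2*hidden)

enc = RelationEncoder(vocab_size=100)
h_r = enc(torch.randint(0, 100, (2, 3)), torch.randint(0, 100, (2, 2)))
print(h_r.shape)  # torch.Size([2, 400])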
Hierarchical Matching between Relation and Question Now we have question contexts of different lengths encoded in $\mathbf {\Gamma }^{(1)}_{1:N}$ and $\mathbf {\Gamma }^{(2)}_{1:N}$ . Unlike the standard usage of deep BiLSTMs that employs the representations in the final layer for prediction, here we expect the two layers of question representations to be complementary to each other, and both should be compared to the relation representation space (Hierarchical Matching). This is important for our task since each relation token can correspond to phrases of different lengths, mainly because of syntactic variations. For example in Table 1 , the relation word written could be matched to either the same single word in the question or a much longer phrase be the writer of. We could perform the above hierarchical matching by computing the similarity between each layer of $\mathbf {\Gamma }$ and $\mathbf {h}^r$ separately and taking the (weighted) sum of the two scores. However, this does not give a significant improvement (see Table 2 ). Our analysis in Section "Relation Detection Results" shows that this naive method suffers from training difficulty, evidenced by the fact that the converged training loss of this model is much higher than that of a single-layer baseline model. This is mainly because (1) deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable, so the training usually falls into local optima where one layer has good matching scores and the other always has a weight close to 0; and (2) the training of deeper architectures itself is more difficult. To overcome the above difficulties, we adopt the idea from Residual Networks BIBREF23 for hierarchical matching by adding shortcut connections between the two BiLSTM layers. We propose two ways of such Hierarchical Residual Matching: (1) Connecting each $\mathbf {\gamma }^{(1)}_i$ and $\mathbf {\gamma }^{(2)}_i$ , resulting in $\mathbf {\gamma }^{\prime }_i=\mathbf {\gamma }^{(1)}_i + \mathbf {\gamma }^{(2)}_i$ for each position $i$ ; the final question representation $\mathbf {h}^q$ is then a max-pooling over all $\mathbf {\gamma }^{\prime }_i$ s, $1 \le i \le N$ . (2) Applying max-pooling on $\mathbf {\Gamma }^{(1)}_{1:N}$ and $\mathbf {\Gamma }^{(2)}_{1:N}$ to get $\mathbf {h}^{q}_{(1)}$ and $\mathbf {h}^{q}_{(2)}$ , respectively, and then setting $\mathbf {h}^q = \mathbf {h}^{q}_{(1)} + \mathbf {h}^{q}_{(2)}$ . Finally, we compute the matching score of $\mathbf {r}$ given $\mathbf {q}$ as $s_{\mathrm {rel}}(\mathbf {r};\mathbf {q})$ , the similarity between $\mathbf {h}^q$ and $\mathbf {h}^r$ . Intuitively, the proposed method should benefit from hierarchical training, since the second layer is fitting the residues from the first layer of matching, so the two layers of representations are more likely to be complementary to each other. This also ensures the vector spaces of the two layers are comparable and makes the second-layer training easier. During training we adopt a ranking loss to maximize the margin between the gold relation $\mathbf {r}^+$ and other relations $\mathbf {r}^-$ in the candidate pool $R$ : $$l_{\mathrm {rel}} = \max \lbrace 0, \gamma - s_{\mathrm {rel}}(\mathbf {r}^+; \mathbf {q}) + s_{\mathrm {rel}}(\mathbf {r}^-; \mathbf {q})\rbrace $$ (Eq. 12) where $\gamma $ is a constant margin parameter. Fig 2 summarizes the above Hierarchical Residual BiLSTM (HR-BiLSTM) model. Another way of performing hierarchical matching is to rely on an attention mechanism, e.g. BIBREF24 , to find the correspondence between different levels of representations.
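A minimal PyTorch-style sketch of the second residual-matching variant and of the ranking loss defined just above; this is my own illustration, the cosine similarity used as the matching score and the margin value 0.1 are assumptions, and the random tensors only demonstrate shapes.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HierResidualQuestionEncoder(nn.Module):
    """Residual-matching variant (2): max-pool each BiLSTM layer separately
    and sum the pooled vectors, h^q = maxpool(Gamma^(1)) + maxpool(Gamma^(2))."""
    def __init__(self, vocab_size, emb_dim=300, hidden=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.layer1 = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.layer2 = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, q_ids):
        g1, _ = self.layer1(self.emb(q_ids))   # Gamma^(1): (batch, N, 2*hidden)
        g2, _ = self.layer2(g1)                # Gamma^(2): (batch, N, 2*hidden)
        return g1.max(dim=1).values + g2.max(dim=1).values   # h^q

def ranking_loss(h_q, h_r_pos, h_r_neg, margin=0.1):
    # Hinge loss l_rel = max(0, gamma - s(r+; q) + s(r-; q)); cosine similarity is
    # used here as the matching score s_rel, which is an assumption of this sketch.
    s_pos = F.cosine_similarity(h_q, h_r_pos, dim=-1)
    s_neg = F.cosine_similarity(h_q, h_r_neg, dim=-1)
    return torch.clamp(margin - s_pos + s_neg, min=0.0).mean()

enc = HierResidualQuestionEncoder(vocab_size=100)
h_q = enc(torch.randint(0, 100, (2, 6)))
print(ranking_loss(h_q, torch.randn(2, 400), torch.randn(2, 400)))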
This performs below the HR-BiLSTM (see Table 2). KBQA Enhanced by Relation Detection This section describes our KBQA pipeline system. We make minimal efforts beyond the training of the relation detection model, making the whole system easy to build. Following previous work BIBREF4 , BIBREF5 , our KBQA system takes an existing entity linker to produce the top- $K$ linked entities, $EL_K(q)$ , for a question $q$ (“initial entity linking”). Then we generate the KB queries for $q$ following the four steps illustrated in Algorithm "KBQA Enhanced by Relation Detection". Input: a question $q$ and its top- $K$ linked entities $EL_K(q)$ ; Output: top query tuple $(\hat{e},\hat{r}, \lbrace (c, r_c)\rbrace )$ . Step 1, Entity Re-Ranking (first-step relation detection): use the raw question text as input for a relation detector to score all relations in the KB that are associated with the entities in $EL_K(q)$ ; use the relation scores to re-rank $EL_K(q)$ and generate a shorter list $EL^{\prime }_{K^{\prime }}(q)$ containing the top- $K^{\prime }$ entity candidates (Section "Entity Re-Ranking"). Step 2, Relation Detection: detect relation(s) using the reformatted question text in which the topic entity is replaced by a special token $<$ e $>$ (Section "Relation Detection"). Step 3, Query Generation: combine the scores from steps 1 and 2, and select the top pair $(\hat{e},\hat{r})$ (Section "Query Generation"). Step 4, Constraint Detection (optional): compute the similarity between $q$ and any neighbor entity $c$ of the entities along the selected core-chain $(\hat{e},\hat{r})$ (connected by a relation $r_c$ ), and add the high-scoring $c$ (and $r_c$ ) to the query (Section "Constraint Detection"). (Algorithm caption: KBQA with two-step relation detection.) Compared to previous approaches, the main difference is that we have an additional entity re-ranking step after the initial entity linking. We have this step because we have observed that entity linking sometimes becomes a bottleneck in KBQA systems. For example, on SimpleQuestions the best reported linker could only get 72.7% top-1 accuracy on identifying topic entities. This is usually due to the ambiguity of entity names; e.g., in Fig 1 (a), there are a TV writer and a baseball player both named “Mike Kelley”, who are impossible to distinguish with entity name matching alone. Having observed that different entity candidates usually connect to different relations, here we propose to help entity disambiguation in the initial entity linking with relations detected in questions. Sections "Entity Re-Ranking" and "Relation Detection" elaborate how our relation detection helps to re-rank entities in the initial entity linking, and how those re-ranked entities then enable more accurate relation detection. The KBQA end task, as a result, benefits from this process. Entity Re-Ranking In this step, we use the raw question text as input for a relation detector to score all relations in the KB with connections to at least one of the entity candidates in $EL_K(q)$ . We call this step relation detection on an entity set, since it does not work on a single topic entity as in the usual setting. We use the HR-BiLSTM as described in Sec. "Improved KB Relation Detection". For each question $q$ , after generating a score $s_{rel}(r;q)$ for each relation using HR-BiLSTM, we use the top $l$ best-scoring relations ( $R^{l}_q$ ) to re-rank the original entity candidates.
Concretely, for each entity $e$ and its associated relations $R_e$ , given the original entity linker score $s_{linker}$ , and the score of the most confident relation $r\in R_q^{l} \cap R_e$ , we sum these two scores to re-rank the entities: $$s_{\mathrm {rerank}}(e;q) =& \alpha \cdot s_{\mathrm {linker}}(e;q) \nonumber \\ + & (1-\alpha ) \cdot \max _{r \in R_q^{l} \cap R_e} s_{\mathrm {rel}}(r;q).\nonumber $$ (Eq. 15) Finally, we select top $K^{\prime }$ $<$ $K$ entities according to score $s_{rerank}$ to form the re-ranked list $EL_{K^{\prime }}^{^{\prime }}(q)$ . We use the same example in Fig 1 (a) to illustrate the idea. Given the input question in the example, a relation detector is very likely to assign high scores to relations such as “episodes_written”, “author_of” and “profession”. Then, according to the connections of entity candidates in KB, we find that the TV writer “Mike Kelley” will be scored higher than the baseball player “Mike Kelley”, because the former has the relations “episodes_written” and “profession”. This method can be viewed as exploiting entity-relation collocation for entity linking. Relation Detection In this step, for each candidate entity $e \in EL_K^{\prime }(q)$ , we use the question text as the input to a relation detector to score all the relations $r \in R_e$ that are associated to the entity $e$ in the KB. Because we have a single topic entity input in this step, we do the following question reformatting: we replace the the candidate $e$ 's entity mention in $q$ with a token “ $<$ e $>$ ”. This helps the model better distinguish the relative position of each word compared to the entity. We use the HR-BiLSTM model to predict the score of each relation $r \in R_e$ : $s_{rel} (r;e,q)$ . Query Generation Finally, the system outputs the $<$ entity, relation (or core-chain) $>$ pair $(\hat{e}, \hat{r})$ according to: $$s(\hat{e}, \hat{r}; q) =& \max _{e \in EL_{K^{\prime }}^{^{\prime }}(q), r \in R_e} \left( \beta \cdot s_{\mathrm {rerank}}(e;q) \right. \nonumber \\ &\left.+ (1-\beta ) \cdot s_{\mathrm {rel}} (r;e,q) \right), \nonumber $$ (Eq. 19) where $\beta $ is a hyperparameter to be tuned. Constraint Detection Similar to BIBREF4 , we adopt an additional constraint detection step based on text matching. Our method can be viewed as entity-linking on a KB sub-graph. It contains two steps: (1) Sub-graph generation: given the top scored query generated by the previous 3 steps, for each node $v$ (answer node or the CVT node like in Figure 1 (b)), we collect all the nodes $c$ connecting to $v$ (with relation $r_c$ ) with any relation, and generate a sub-graph associated to the original query. (2) Entity-linking on sub-graph nodes: we compute a matching score between each $n$ -gram in the input question (without overlapping the topic entity) and entity name of $c$ (except for the node in the original query) by taking into account the maximum overlapping sequence of characters between them (see Appendix A for details and B for special rules dealing with date/answer type constraints). If the matching score is larger than a threshold $\theta $ (tuned on training set), we will add the constraint entity $c$ (and $r_c$ ) to the query by attaching it to the corresponding node $v$ on the core-chain. Experiments Task Introduction & Settings We use the SimpleQuestions BIBREF2 and WebQSP BIBREF25 datasets. Each question in these datasets is labeled with the gold semantic parse. 
Hence we can directly evaluate relation detection performance independently as well as evaluate on the KBQA end task. SimpleQuestions (SQ): It is a single-relation KBQA task. The KB we use consists of a Freebase subset with 2M entities (FB2M) BIBREF2 , in order to compare with previous research. yin2016simple also evaluated their relation extractor on this data set and released their proposed question-relation pairs, so we run our relation detection model on their data set. For the KBQA evaluation, we also start with their entity linking results. Therefore, our results can be compared with their reported results on both tasks. WebQSP (WQ): A multi-relation KBQA task. We use the entire Freebase KB for evaluation purposes. Following yih-EtAl:2016:P16-2, we use S-MART BIBREF26 entity-linking outputs. In order to evaluate the relation detection models, we create a new relation detection task from the WebQSP data set. For each question and its labeled semantic parse: (1) we first select the topic entity from the parse; and then (2) select all the relations and relation chains (length $\le $ 2) connected to the topic entity, and set the core-chain labeled in the parse as the positive label and all the others as the negative examples. We tune the following hyper-parameters on development sets: (1) the size of hidden states for LSTMs ({50, 100, 200, 400}); (2) learning rate ({0.1, 0.5, 1.0, 2.0}); (3) whether the shortcut connections are between hidden states or between max-pooling results (see Section "Hierarchical Matching between Relation and Question" ); and (4) the number of training epochs. For both the relation detection experiments and the second-step relation detection in KBQA, we have entity replacement first (see Section "Relation Detection" and Figure 1 ). All word vectors are initialized with 300- $d$ pretrained word embeddings BIBREF27 . The embeddings of relation names are randomly initialized, since existing pre-trained relation embeddings (e.g. TransE) usually support limited sets of relation names. We leave the usage of pre-trained relation embeddings to future work. Relation Detection Results Table 2 shows the results on two relation detection tasks. The AMPCNN result is from BIBREF20 , which yielded state-of-the-art scores by outperforming several attention-based methods. We re-implemented the BiCNN model from BIBREF4 , where both questions and relations are represented with the word hash trick on character tri-grams. The baseline BiLSTM with relation word sequence appears to be the best baseline on WebQSP and is close to the previous best result of AMPCNN on SimpleQuestions. Our proposed HR-BiLSTM outperformed the best baselines on both tasks by margins of 2-3% (p $<$ 0.001 and 0.01 compared to the best baseline BiLSTM w/ words on SQ and WQ respectively). Note that using only relation names instead of words results in a weaker baseline BiLSTM model. The model yields a significant performance drop on SimpleQuestions (91.2% to 88.9%). However, the drop is much smaller on WebQSP, and it suggests that unseen relations have a much bigger impact on SimpleQuestions. The bottom of Table 2 shows ablation results of the proposed HR-BiLSTM. First, hierarchical matching between questions and both relation names and relation words yields improvement on both datasets, especially for SimpleQuestions (93.3% vs. 91.2/88.8%). 
Second, residual learning helps hierarchical matching compared to the weighted-sum and attention-based baselines (see Section "Hierarchical Matching between Relation and Question"). For the attention-based baseline, we tried the model from BIBREF24 and its one-way variations, where the one-way model gives better results. Note that residual learning helps significantly on WebQSP (80.65% to 82.53%), while it does not help as much on SimpleQuestions. On SimpleQuestions, even removing the deep layers only causes a small drop in performance. WebQSP benefits more from the residual and deeper architecture, possibly because in this dataset it is more important to handle a larger scope of context matching. Finally, on WebQSP, replacing the BiLSTM with a CNN in our hierarchical matching framework results in a large performance drop, yet on SimpleQuestions the gap is much smaller. We believe this is because the LSTM relation encoder can better learn the composition of chains of relations in WebQSP, as it is better at dealing with longer dependencies. Next, we present empirical evidence showing why our HR-BiLSTM model achieves the best scores. We use WebQSP for this analysis. First, we hypothesize that training of the weighted-sum model usually falls into local optima, since deep BiLSTMs do not guarantee that the two levels of question hidden representations are comparable. This is evidenced by the fact that during training one layer usually gets a weight close to 0 and is thus ignored. For example, one run gives us weights of -75.39/0.14 for the two layers (we take the exponential for the final weighted sum). It also gives much lower training accuracy (91.94%) compared to HR-BiLSTM (95.67%), suffering from training difficulty. Second, compared to our deep BiLSTM with shortcut connections, we hypothesize that for KB relation detection, training deep BiLSTMs is more difficult without shortcut connections. Our experiments suggest that a deeper BiLSTM does not always result in higher training accuracy: in the experiments a two-layer BiLSTM converges to 94.99%, even lower than the 95.25% achieved by a single-layer BiLSTM. Since under our setting the two-layer model contains the single-layer model as a special case (so it could potentially fit the training data better), this result suggests that the deep BiLSTM without shortcut connections suffers more from training difficulty. Finally, we hypothesize that HR-BiLSTM is more than a combination of two BiLSTMs with residual connections, because it encourages the hierarchical architecture to learn different levels of abstraction. To verify this, we replace the deep BiLSTM question encoder with two single-layer BiLSTMs (both on words) with shortcut connections between their hidden states. This decreases test accuracy to 76.11%. It gives similar training accuracy compared to HR-BiLSTM, indicating a more serious over-fitting problem. This confirms that the residual and deep structures both contribute to the good performance of HR-BiLSTM. KBQA End-Task Results Table 3 compares our system with two published baselines: (1) STAGG BIBREF4 , the state-of-the-art on WebQSP, and (2) AMPCNN BIBREF20 , the state-of-the-art on SimpleQuestions. Since these two baselines are specially designed/tuned for one particular dataset, they do not generalize well when applied to the other dataset.
In order to highlight the effect of different relation detection models on the KBQA end-task, we also implemented another baseline that uses our KBQA system but replaces HR-BiLSTM with our implementation of AMPCNN (for SimpleQuestions) or the char-3-gram BiCNN (for WebQSP) relation detectors (second block in Table 3 ). Compared to the baseline relation detector (3rd row of results), our method, which includes an improved relation detector (HR-BiLSTM), improves the KBQA end task by 2-3% (4th row). Note that in contrast to previous KBQA systems, our system does not use joint-inference or feature-based re-ranking step, nevertheless it still achieves better or comparable results to the state-of-the-art. The third block of the table details two ablation tests for the proposed components in our KBQA systems: (1) Removing the entity re-ranking step significantly decreases the scores. Since the re-ranking step relies on the relation detection models, this shows that our HR-BiLSTM model contributes to the good performance in multiple ways. Appendix C gives the detailed performance of the re-ranking step. (2) In contrast to the conclusion in BIBREF4 , constraint detection is crucial for our system. This is probably because our joint performance on topic entity and core-chain detection is more accurate (77.5% top-1 accuracy), leaving a huge potential (77.5% vs. 58.0%) for the constraint detection module to improve. Finally, like STAGG, which uses multiple relation detectors (see yih2015semantic for the three models used), we also try to use the top-3 relation detectors from Section "Relation Detection Results" . As shown on the last row of Table 3 , this gives a significant performance boost, resulting in a new state-of-the-art result on SimpleQuestions and a result comparable to the state-of-the-art on WebQSP. Conclusion KB relation detection is a key step in KBQA and is significantly different from general relation extraction tasks. We propose a novel KB relation detection model, HR-BiLSTM, that performs hierarchical matching between questions and KB relations. Our model outperforms the previous methods on KB relation detection tasks and allows our KBQA system to achieve state-of-the-arts. For future work, we will investigate the integration of our HR-BiLSTM into end-to-end systems. For example, our model could be integrated into the decoder in BIBREF31 , to provide better sequence prediction. We will also investigate new emerging datasets like GraphQuestions BIBREF32 and ComplexQuestions BIBREF30 to handle more characteristics of general QA. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What is te core component for KBQA? Only give me the answer and do not output any other words.
Answer:
[ "answer questions by obtaining information from KB tuples ", "hierarchical matching between questions and relations with residual learning" ]
276
6,650
single_doc_qa
ecc3f5d345354e9592ffc62c2c236d38
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Headline generation is the process of creating a headline-style sentence given an input article. The research community has been regarding the task of headline generation as a summarization task BIBREF1, ignoring the fundamental differences between headlines and summaries. While summaries aim to contain most of the important information from the articles, headlines do not necessarily need to. Instead, a good headline needs to capture people's attention and serve as an irresistible invitation for users to read through the article. For example, the headline “$2 Billion Worth of Free Media for Trump”, which gives only an intriguing hint, is considered better than the summarization style headline “Measuring Trump’s Media Dominance” , as the former gets almost three times the readers as the latter. Generating headlines with many clicks is especially important in this digital age, because many of the revenues of journalism come from online advertisements and getting more user clicks means being more competitive in the market. However, most existing websites naively generate sensational headlines using only keywords or templates. Instead, this paper aims to learn a model that generates sensational headlines based on an input article without labeled data. To generate sensational headlines, there are two main challenges. Firstly, there is a lack of sensationalism scorer to measure how sensational a headline is. Some researchers have tried to manually label headlines as clickbait or non-clickbait BIBREF2, BIBREF3. However, these human-annotated datasets are usually small and expensive to collect. To capture a large variety of sensationalization patterns, we need a cheap and easy way to collect a large number of sensational headlines. Thus, we propose a distant supervision strategy to collect a sensationalism dataset. We regard headlines receiving lots of comments as sensational samples and the headlines generated by a summarization model as non-sensational samples. Experimental results show that by distinguishing these two types of headlines, we can partially teach the model a sense of being sensational. Secondly, after training a sensationalism scorer on our sensationalism dataset, a natural way to generate sensational headlines is to maximize the sensationalism score using reinforcement learning (RL). However, the following shows an example of a RL model maximizing the sensationalism score by generating a very unnatural sentence, while its sensationalism scorer gave a very high score of 0.99996: UTF8gbsn 十个可穿戴产品的设计原则这消息消息可惜说明 Ten design principles for wearable devices, this message message pity introduction. This happens because the sensationalism scorer can make mistakes and RL can generate unnatural phrases which fools our sensationalism scorer. Thus, how to effectively leverage RL with noisy rewards remains an open problem. To deal with the noisy reward, we introduce Auto-tuned Reinforcement Learning (ARL). Our model automatically tunes the ratio between MLE and RL based on how sensational the training headline is. In this way, we effectively take advantage of RL with a noisy reward to generate headlines that are both sensational and fluent. 
The major contributions of this paper are as follows: 1) To the best of our knowledge, we propose the first-ever model that tackles the sensational headline generation task with reinforcement learning techniques. 2) Without human-annotated data, we propose a distant supervision strategy to train a sensationalism scorer as a reward function.3) We propose a novel loss function, Auto-tuned Reinforcement Learning, to give dynamic weights to balance between MLE and RL. Our code will be released . Sensationalism Scorer To evaluate the sensationalism intensity score $\alpha _{\text{sen}}$ of a headline, we collect a sensationalism dataset and then train a sensationalism scorer. For the sensationalism dataset collection, we choose headlines with many comments from popular online websites as positive samples. For the negative samples, we propose to use the generated headlines from a sentence summarization model. Intuitively, the summarization model, which is trained to preserve the semantic meaning, will lose the sensationalization ability and thus the generated negative samples will be less sensational than the original one, similar to the obfuscation of style after back-translation BIBREF4. For example, an original headline like UTF8gbsn“一趟挣10万?铁总增开申通、顺丰专列" (One trip to earn 100 thousand? China Railway opens new Shentong and Shunfeng special lines) will become UTF8gbsn“中铁总将增开京广两列快递专列" (China Railway opens two special lines for express) from the baseline model, which loses the sensational phrases of UTF8gbsn“一趟挣10万?" (One trip to earn 100 thousand?) . We then train the sensationalism scorer by classifying sensational and non-sensational headlines using a one-layer CNN with a binary cross entropy loss $L_{\text{sen}}$. Firstly, 1-D convolution is used to extract word features from the input embeddings of a headline. This is followed by a ReLU activation layer and a max-pooling layer along the time dimension. All features from different channels are concatenated together and projected to the sensationalism score by adding another fully connected layer with sigmoid activation. Binary cross entropy is used to compute the loss $L_{\text{sen}}$. Sensationalism Scorer ::: Training Details and Dataset For the CNN model, we choose filter sizes of 1, 3, and 5 respectively. Adam is used to optimize $L_{sen}$ with a learning rate of 0.0001. We set the embedding size as 300 and initialize it from qiu2018revisiting trained on the Weibo corpus with word and character features. We fix the embeddings during training. For dataset collection, we utilize the headlines collected in qin2018automatic, lin2019learning from Tencent News, one of the most popular Chinese news websites, as the positive samples. We follow the same data split as the original paper. As some of the links are not available any more, we get 170,754 training samples and 4,511 validation samples. For the negative training samples collection, we randomly select generated headlines from a pointer generator BIBREF0 model trained on LCSTS dataset BIBREF5 and create a balanced training corpus which includes 351,508 training samples and 9,022 validation samples. To evaluate our trained classifier, we construct a test set by randomly sampling 100 headlines from the test split of LCSTS dataset and the labels are obtained by 11 human annotators. Annotations show that 52% headlines are labeled as positive and 48% headlines as negative by majority voting (The detail on the annotation can be found in Section SECREF26). 
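A PyTorch-style sketch of the one-layer CNN sensationalism scorer described above (my own re-implementation, not the authors' code): filter sizes 1/3/5, frozen 300-d embeddings, ReLU, max-pooling over time, concatenation, and a sigmoid-activated linear output trained with binary cross entropy are taken from the text, while the channel count of 100 and the toy inputs are assumptions.

import torch
import torch.nn as nn

class SensationalismScorer(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, channels=100, filter_sizes=(1, 3, 5)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.emb.weight.requires_grad = False     # embeddings are fixed during training
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, channels, k, padding=k // 2) for k in filter_sizes])
        self.out = nn.Linear(channels * len(filter_sizes), 1)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.sigmoid(self.out(torch.cat(feats, dim=1))).squeeze(-1)

model = SensationalismScorer(vocab_size=5000)
scores = model(torch.randint(0, 5000, (4, 12)))
loss = nn.BCELoss()(scores, torch.tensor([1.0, 0.0, 1.0, 0.0]))  # L_sen; Adam lr=1e-4 per the text
print(scores.shape, loss.item())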
Sensationalism Scorer ::: Results and Discussion Our classifier achieves 0.65 accuracy and 0.65 averaged F1 score on the test set while a random classifier would only achieve 0.50 accuracy and 0.50 averaged F1 score. This confirms that the predicted sensationalism score can partially capture the sensationalism of headlines. On the other hand, a more natural choice is to take headlines with few comments as negative examples. Thus, we train another baseline classifier on a crawled balanced sensationalism corpus of 84k headlines where the positive headlines have at least 28 comments and the negative headlines have less than 5 comments. However, the results on the test set show that the baseline classifier gets 60% accuracy, which is worse than the proposed classifier (which achieves 65%). The reason could be that the balanced sensationalism corpus are sampled from different distributions from the test set and it is hard for the trained model to generalize. Therefore, we choose the proposed one as our sensationalism scorer. Therefore, our next challenge is to show that how to leverage this noisy sensationalism reward to generate sensational headlines. Sensational Headline Generation Our sensational headline generation model takes an article as input and output a sensational headline. The model consists of a Pointer-Gen headline generator and is trained by ARL. The diagram of ARL can be found in Figure FIGREF6. We denote the input article as $x=\lbrace x_1,x_2,x_3,\cdots ,x_M\rbrace $, and the corresponding headline as $y^*=\lbrace y_1^*,y_2^*,y_3^*,\cdots ,y_T^*\rbrace $, where $M$ is the number of tokens in an article and $T$ is the number of tokens in a headline. Sensational Headline Generation ::: Pointer-Gen Headline Generator We choose Pointer Generator (Pointer-Gen) BIBREF0, a widely used summarization model, as our headline generator for its ability to copy words from the input article. It takes a news article as input and generates a headline. Firstly, the tokens of each article, $\lbrace x_1,x_2,x_3,\cdots ,x_M\rbrace $, are fed into the encoder one-by-one and the encoder generates a sequence of hidden states $h_i$. For each decoding step $t$, the decoder receives the embedding for each token of a headline $y_t$ as input and updates its hidden states $s_t$. An attention mechanism following luong2015effective is used: where $v$, $W_h$, $W_s$, and $b_{attn}$ are the trainable parameters and $h_t^*$ is the context vector. $s_t$ and $h_t^*$ are then combined to give a probability distribution over the vocabulary through two linear layers: where $V$, $b$, $V^{^{\prime }}$, and $b^{^{\prime }}$ are trainable parameters. We use a pointer generator network to enable our model to copy rare/unknown words from the input article, giving the following final word probability: where $x^t$ is the embedding of the input word of the decoder, $w_{h^*}^T$, $w_s^T$, $w_x^T$, and $b_{ptr}$ are trainable parameters, and $\sigma $ is the sigmoid function. Sensational Headline Generation ::: Training Methods We first briefly introduce MLE and RL objective functions, and a naive way to mix these two by a hyper-parameter $\lambda $. Then we point out the challenge of training with noisy reward, and propose ARL to address this issue. Sensational Headline Generation ::: Training Methods ::: MLE and RL A headline generation model can be trained with MLE, RL or a combination of MLE and RL. MLE training is to minimize the negative log likelihood of the training headlines. 
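The displayed equations of the pointer-generator decoder are not fully reproduced in this passage, so the sketch below follows the standard pointer-generator mixture of BIBREF0 that the section builds on: the final distribution is p_gen times the vocabulary softmax plus (1 - p_gen) times the attention mass copied onto source tokens. Shapes and the simplified handling of the extended vocabulary are my own conventions.

import torch

def final_distribution(p_vocab, attention, src_ids, p_gen, vocab_size):
    """p_vocab:   (batch, vocab_size) decoder softmax over the fixed vocabulary
    attention: (batch, src_len)    attention weights over source tokens
    src_ids:   (batch, src_len)    source token ids (OOVs would map to extended ids)
    p_gen:     (batch, 1)          generation probability from the sigmoid gate"""
    copy_dist = torch.zeros(p_vocab.size(0), vocab_size,
                            device=p_vocab.device).scatter_add(1, src_ids, attention)
    return p_gen * p_vocab + (1.0 - p_gen) * copy_dist

batch, src_len, vocab = 2, 5, 20
p_vocab = torch.softmax(torch.randn(batch, vocab), dim=-1)
attn = torch.softmax(torch.randn(batch, src_len), dim=-1)
src = torch.randint(0, vocab, (batch, src_len))
out = final_distribution(p_vocab, attn, src, torch.full((batch, 1), 0.8), vocab)
print(out.sum(dim=-1))  # each row sums to 1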
We feed $y^*$ into the decoder word by word and maximize the likelihood of $y^*$. The loss function for MLE becomes For RL training, we choose the REINFORCE algorithm BIBREF6. In the training phase, after encoding an article, a headline $y^s = \lbrace y_1^s, y_2^s, y_3^s, \cdots , y_T^s\rbrace $ is obtained by sampling from $P(w)$ from our generator, and then a reward of sensationalism or ROUGE(RG) is calculated. We use the baseline reward $\hat{R_t}$ to reduce the variance of the reward, similar to ranzato2015sequence. To elaborate, a linear model is deployed to estimate the baseline reward $\hat{R_t}$ based on $t$-th state $o_t$ for each timestep $t$. The parameters of the linear model are trained by minimizing the mean square loss between $R$ and $\hat{R_t}$: where $W_r$ and $b_r$ are trainable parameters. To maximize the expected reward, our loss function for RL becomes A naive way to mix these two objective functions using a hyper-parameter $\lambda $ has been successfully incorporated in the summarization task BIBREF7. It includes the MLE training as a language model to mitigate the readability and quality issues in RL. The mixed loss function is shown as follows: where $*$ is the reward type. Usually $\lambda $ is large, and paulus2017deep used 0.9984. Sensational Headline Generation ::: Training Methods ::: Auto-tuned Reinforcement Learning Applying the naive mixed training method using sensationalism score as the reward is not obvious/trivial in our task. The main reason is that our sensationalism reward is notably more noisy and more fragile than the ROUGE-L reward or abstractive reward used in the summarization task BIBREF7, BIBREF8. A higher ROUGE-L F1 reward in summarization indicates higher overlapping ratio between generation and true summary statistically, but our sensationalism reward is a learned score which is fragile to be fooled with unnatural samples. To effectively train the model with RL under noisy sensationalism reward, our idea is to balance RL with MLE. However, we argue that the weighted ratio between MLE and RL should be sample-dependent, instead of being fixed for all training samples as in paulus2017deep, kryscinski2018improving. The reason is that, RL and MLE have inconsistent optimization objectives. When the training headline is non-sensational, MLE training will encourage our model to imitate the training headline (thus generating non-sensational headlines), which counteracts the effects of RL training to generate sensational headlines. The sensationalism score is, therefore, used to give dynamic weight to MLE and RL. Our ARL loss function becomes: If $\alpha _{\text{sen}}(y^*)$ is high, meaning the training headline is sensational, our loss function encourages our model to imitate the sample more using the MLE training. If $\alpha _{\text{sen}}(y^*)$ is low, our loss function replies on RL training to improve the sensationalism. Note that the weight $\alpha _{\text{sen}}(y^*)$ is different from our sensationalism reward $\alpha _{\text{sen}}(y^s)$ and we call the loss function Auto-tuned Reinforcement Learning, because the ratio between MLE and RL are well “tuned” towards different samples. Sensational Headline Generation ::: Dataset We use LCSTS BIBREF5 as our dataset to train the summarization model. The dataset is collected from the Chinese microblogging website Sina Weibo. It contains over 2 million Chinese short texts with corresponding headlines given by the author of each text. 
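The displayed ARL equation is missing from this passage, so the snippet below is only my reading of the description: the sensationalism score of the training headline, alpha_sen(y*), weights the MLE term per sample, with the remaining weight going to the RL term. It is a hedged sketch, not the verbatim formula, and the numbers are invented.

import torch

def arl_loss(mle_loss, rl_loss, sensationalism_of_target):
    """mle_loss, rl_loss: per-sample losses, shape (batch,)
    sensationalism_of_target: alpha_sen(y*) in [0, 1], shape (batch,)"""
    a = sensationalism_of_target
    return (a * mle_loss + (1.0 - a) * rl_loss).mean()

# Contrast with the fixed-lambda mixed loss L = lambda * L_MLE + (1 - lambda) * L_RL:
mle = torch.tensor([2.3, 1.7])
rl = torch.tensor([0.9, 1.2])
alpha_sen = torch.tensor([0.95, 0.10])   # sensational vs. non-sensational training headline
print(arl_loss(mle, rl, alpha_sen))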
The dataset is split into 2,400,591 samples for training, 10,666 samples for validation and 725 samples for testing. We tokenize each sentence with Jieba and a vocabulary size of 50000 is saved. Sensational Headline Generation ::: Baselines and Our Models We experiment and compare with the following models. Pointer-Gen is the baseline model trained by optimizing $L_\text{MLE}$ in Equation DISPLAY_FORM13. Pointer-Gen+Pos is the baseline model by training Pointer-Gen only on positive examples whose sensationalism score is larger than 0.5 Pointer-Gen+Same-FT is the model which fine-tunes Pointer-Gen on the training samples whose sensationalism score is larger than 0.1 Pointer-Gen+Pos-FT is the model which fine-tunes Pointer-Gen on the training samples whose sensationalism score is larger than 0.5 Pointer-Gen+RL-ROUGE is the baseline model trained by optimizing $L_\text{RL-ROUGE}$ in Equation DISPLAY_FORM17, with ROUGE-L BIBREF9 as the reward. Pointer-Gen+RL-SEN is the baseline model trained by optimizing $L_\text{RL-SEN}$ in Equation DISPLAY_FORM17, with $\alpha _\text{sen}$ as the reward. Pointer-Gen+ARL-SEN is our model trained by optimizing $L_\text{ARL-SEN}$ in Equation DISPLAY_FORM19, with $\alpha _\text{sen}$ as the reward. Test set is the headlines from the test set. Note that we didn't compare to Pointer-Gen+ARL-ROUGE as it is actually Pointer-GEN. Recall that $\alpha _{\text{sen}}(y^*)$ in Equation DISPLAY_FORM19 measures how good (based on reward function) is $y^*$. Then the loss function for Pointer-Gen+ARL-ROUGE will be We also tried text style transfer baseline BIBREF10, but the generated headlines were very poor (many unknown words and irrelevant). Sensational Headline Generation ::: Training Details MLE training: An Adam optimizer is used with the learning rate of 0.0001 to optimize $L_{\text{MLE}}$. The batch size is set as 128 and a one-layer, bi-directional Long Short-Term Memory (bi-LSTM) model with 512 hidden sizes and a 350 embedding size is utilized. Gradients with the l2 norm larger than 2.0 are clipped. We stop training when the ROUGE-L f-score stops increasing. Hybrid training: An Adam optimizer with a learning rate of 0.0001 is used to optimize $L_{\text{RL-*}}$ (Equation DISPLAY_FORM17) and $L_\text{{ARL-SEN}}$ (Equation DISPLAY_FORM19). When training Pointer-Gen+RL-ROUGE, the best $\lambda $ is chosen based on the ROUGE-L score on the validation set. In our experiment, $\lambda $ is set as 0.95. An Adam optimizer with a learning rate of 0.001 is used to optimize $L_b$. When training Pointer-Gen+ARL-SEN, we don't use the full LCSTS dataset, but only headlines with a sensationalism score larger than 0.1 as we observe that Pointer-Gen+ARL-SEN will generate a few unnatural phrases when using full dataset. We believe the reason is the high ratio of RL during training. Figure FIGREF23 shows that the probability density near 0 is very high, meaning that in each batch, many of the samples will have a very low sensationalism score. On expectation, each sample will receive 0.239 MLE training and 0.761 RL training. This leads to RL dominanting the loss. Thus, we propose to filter samples with a minimum sensationalism score with 0.1 and it works very well. For Pointer-Gen+RL-SEN, we also set the minimum sensationalism score as 0.1, and $\lambda $ is set as 0.5 to remove unnatural phrases, making a fair comparison to Pointer-Gen+ARL-SEN. 
We stop training Pointer-Gen+Same-FT, Pointer-Gen+Pos-FT, Pointer-Gen+RL-SEN and Pointer-Gen+ARL-SEN when $\alpha _\text{sen}$ stops increasing on the validation set. Beam search with a beam size of 5 is adopted for decoding in all models. Sensational Headline Generation ::: Evaluation Metrics We briefly describe the evaluation metrics below. ROUGE: ROUGE is a commonly used evaluation metric for summarization. It measures the N-gram overlap between generated and reference headlines. We use it to evaluate the relevance of generated headlines. The widely used pyrouge toolkit is used to calculate ROUGE-1 (RG-1), ROUGE-2 (RG-2), and ROUGE-L (RG-L). Human evaluation: We randomly sample 50 articles from the test set and send the generated headlines from all models, together with the corresponding headlines in the test set, to human annotators. We evaluate the sensationalism and fluency of the headlines by setting up two independent human annotation tasks. We ask 10 annotators to label each headline for each task. For the sensationalism annotation, each annotator is asked one question, “Is the headline sensational?”, and he/she has to choose either `yes' or `no'. The annotators were not told which system each headline is from. The process of distributing samples and recruiting annotators is managed by Crowdflower. After annotation, we define the sensationalism score as the proportion of annotations labeled as `yes' over all generated headlines from one model. For the fluency annotation, we repeat the same procedure as for the sensationalism annotation, except that we ask each annotator the question “Is the headline fluent?” We define the fluency score as the proportion of annotations labeled as `yes' over all headlines from one specific model. We put the human annotation instructions in the supplemental material. Results We first compare all four models, Pointer-Gen, Pointer-Gen+RL-ROUGE, Pointer-Gen+RL-SEN, and Pointer-Gen+ARL-SEN, to existing models with ROUGE in Table TABREF25 to establish that our model produces relevant headlines, and we leave sensationalism to human evaluation. Note that we only compare our models to commonly used strong summarization baselines, to validate that our implementation achieves comparable performance to existing work. In our implementation, Pointer-Gen achieves a 34.51 RG-1 score, 22.21 RG-2 score, and 31.68 RG-L score, which is similar to the results of gu2016incorporating. Pointer-Gen+ARL-SEN, although optimized for the sensationalism reward, achieves similar performance to our Pointer-Gen baseline, which means that Pointer-Gen+ARL-SEN still keeps its summarization ability. An example of headlines generated by different models in Table TABREF29 shows that Pointer-Gen and Pointer-Gen+RL-ROUGE learn to summarize the main point of the article: “The Nikon D600 camera is reported to have black spots when taking photos”. Pointer-Gen+RL-SEN makes the headline more sensational by blaming Nikon for attributing the damage to the smog. Pointer-Gen+ARL-SEN generates the most sensational headline by exaggerating the result (“Getting a serious trouble!”) to maximize the reader's attention. We then compare different models using the sensationalism score in Table TABREF30. The Pointer-Gen baseline model achieves a 42.6% sensationalism score, which is the minimum that a typical summarization model achieves. By filtering out low-sensational headlines, Pointer-Gen+Same-FT and Pointer-Gen+Pos-FT achieve higher sensationalism scores, which implies the effectiveness of our sensationalism scorer.
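The sensationalism and fluency scores used here are simple proportions of `yes' annotations; a small helper makes that definition explicit (the data layout is an assumption).

```python
def proportion_yes(annotations):
    """annotations: list of 'yes'/'no' labels collected for all headlines of one model
    on one task (sensationalism or fluency). Returns the score in percent."""
    if not annotations:
        return 0.0
    return 100.0 * sum(1 for a in annotations if a == "yes") / len(annotations)

# e.g. 10 annotators x 50 headlines = 500 labels per model per task
```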
Our Pointer-Gen+ARL-SEN model achieves the best sensationalism score of 60.8%. This is an absolute improvement of 18.2% over the Pointer-Gen baseline. A Chi-square test on the results confirms that Pointer-Gen+ARL-SEN is statistically significantly more sensational than all the other baseline models, with the largest p-value less than 0.01. Also, we find that the test set headlines achieve a 57.8% sensationalism score, much higher than the Pointer-Gen baseline, which also supports our intuition that generated headlines tend to be less sensational than the original ones. On the other hand, we found that Pointer-Gen+Pos is much worse than the other baselines. The reason is that training on sensational samples alone discards around 80% of the whole training set, which is also helpful for maintaining relevance and a good language model. It shows the necessity of using RL. In addition, both Pointer-Gen+RL-SEN and Pointer-Gen+ARL-SEN, which use the sensationalism score as the reward, obtain statistically better results than Pointer-Gen+RL-ROUGE and Pointer-Gen, with a p-value less than 0.05 by a Chi-square test. This result shows the effectiveness of RL in generating more sensational headlines. The reason is that even though our noisy classifier could also learn to classify domains, the generator during RL training is not allowed to increase the reward by shifting domains but is encouraged to generate more sensational headlines, due to the consistency constraint on the domains of the headline and the article. Furthermore, Pointer-Gen+ARL-SEN achieves better performance than Pointer-Gen+RL-SEN, which confirms the superiority of the ARL loss function. We also visualize in Figure FIGREF31 a comparison between Pointer-Gen+ARL-SEN and Pointer-Gen+RL-SEN according to how sensational the test set headlines are. The blue bars denote the smaller score between the two models. For example, if the blue bar is 0.6, it means that the worse model between Pointer-Gen+RL-SEN and Pointer-Gen+ARL-SEN achieves 0.6. The orange/black color further indicates the better model and its score. We find that Pointer-Gen+ARL-SEN outperforms Pointer-Gen+RL-SEN in most cases. The improvement is higher when the test set headlines are not sensational (the sensationalism score is less than 0.5), which may be attributed to the higher ratio of RL training on non-sensational headlines. Apart from the sensationalism evaluation, we measure the fluency of the headlines generated by different models. Fluency scores in Table TABREF30 show that Pointer-Gen+RL-SEN and Pointer-Gen+ARL-SEN achieve comparable fluency performance to Pointer-Gen and Pointer-Gen+RL-ROUGE. Test set headlines achieve the best performance among all models, but the difference is not statistically significant. Also, we observe that fine-tuning on sensational headlines hurts the performance, both in sensationalism and fluency. After manually checking the outputs, we observe that our model is able to generate sensational headlines using diverse sensationalization strategies. These strategies include, but are not limited to, creating a curiosity gap, asking questions, highlighting numbers, being emotional and emphasizing the user. Examples can be found in Table TABREF32. Related Work Our work is related to summarization tasks. An encoder-decoder model was first applied to two sentence-level abstractive summarization tasks on the DUC-2004 and Gigaword datasets BIBREF12.
This model was later extended by selective encoding BIBREF13, a coarse to fine approach BIBREF14, minimum risk training BIBREF1, and topic-aware models BIBREF15. As long summaries were recognized as important, the CNN/Daily Mail dataset was used in nallapati2016abstractive. Graph-based attention BIBREF16, pointer-generator with coverage loss BIBREF0 are further developed to improve the generated summaries. celikyilmaz2018deep proposed deep communicating agents for representing a long document for abstractive summarization. In addition, many papers BIBREF17, BIBREF18, BIBREF19 use extractive methods to directly select sentences from articles. However, none of these work considered the sensationalism of generated outputs. RL is also gaining popularity as it can directly optimize non-differentiable metrics BIBREF20, BIBREF21, BIBREF22. paulus2017deep proposed an intra-decoder model and combined RL and MLE to deal with summaries with bad qualities. RL has also been explored with generative adversarial networks (GANs) BIBREF23. liu2017generative applied GANs on summarization task and achieved better performance. niu2018polite tackles the problem of polite generation with politeness reward. Our work is different in that we propose a novel function to balance RL and MLE. Our task is also related to text style transfer. Implicit methods BIBREF10, BIBREF24, BIBREF4 transfer the styles by separating sentence representations into content and style, for example using back-translationBIBREF4. However, these methods cannot guarantee the content consistency between the original sentence and transferred output BIBREF25. Explicit methods BIBREF26, BIBREF25 transfer the style by directly identifying style related keywords and modifying them. However, sensationalism is not always restricted to keywords, but the full sentence. By leveraging small human labeled English dataset, clickbait detection has been well investigated BIBREF2, BIBREF27, BIBREF3. However, these human labeled dataset are not available for other languages, such as Chinese. Modeling sensationalism is also related to modeling emotion. Emotion has been well investigated in both word levelBIBREF28, BIBREF29 and sentence levelBIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34. It has also been considered an important factor in engaging interactive systemsBIBREF35, BIBREF36, BIBREF37. Although we observe that sensational headlines contain emotion, it is still not clear which emotion and how emotions will influence the sensationalism. Conclusion and Future Work In this paper, we propose a model that generates sensational headlines without labeled data using Reinforcement Learning. Firstly, we propose a distant supervision strategy to train the sensationalism scorer. As a result, we achieve 65% accuracy between the predicted sensationalism score and human evaluation. To effectively leverage this noisy sensationalism score as the reward for RL, we propose a novel loss function, ARL, to automatically balance RL with MLE. Human evaluation confirms the effectiveness of both our sensationalism scorer and ARL to generate more sensational headlines. Future work can be improving the sensationalism scorer and investigating the applications of dynamic balancing methods between RL and MLE in textGANBIBREF23. Our work also raises the ethical questions about generating sensational headlines, which can be further explored. Acknowledgments Thanks to ITS/319/16FP of Innovation Technology Commission, HKUST 16248016 of Hong Kong Research Grants Council for funding. 
In addition, we thank Zhaojiang Lin for helpful discussion and Yan Xu, Zihan Liu for the data collection. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Which baselines are used for evaluation? Only give me the answer and do not output any other words.
Answer:
[ "Pointer-Gen, Pointer-Gen+Pos, Pointer-Gen+Same-FT, Pointer-Gen+Pos-FT, Pointer-Gen+RL-ROUGE, Pointer-Gen+RL-SEN" ]
276
6,169
single_doc_qa
5cd35a6ad41d4c2f9b7f17893245bf2e
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Microblogging such as Twitter and Weibo is a popular social networking service, which allows users to post messages up to 140 characters. There are millions of active users on the platform who stay connected with friends. Unfortunately, spammers also use it as a tool to post malicious links, send unsolicited messages to legitimate users, etc. A certain amount of spammers could sway the public opinion and cause distrust of the social platform. Despite the use of rigid anti-spam rules, human-like spammers whose homepages having photos, detailed profiles etc. have emerged. Unlike previous "simple" spammers, whose tweets contain only malicious links, those "smart" spammers are more difficult to distinguish from legitimate users via content-based features alone BIBREF0 . There is a considerable amount of previous work on spammer detection on social platforms. Researcher from Twitter Inc. BIBREF1 collect bot accounts and perform analysis on the user behavior and user profile features. Lee et al. lee2011seven use the so-called social honeypot by alluring social spammers' retweet to build a benchmark dataset, which has been extensively explored in our paper. Some researchers focus on the clustering of urls in tweets and network graph of social spammers BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , showing the power of social relationship features.As for content information modeling, BIBREF6 apply improved sparse learning methods. However, few studies have adopted topic-based features. Some researchers BIBREF7 discuss topic characteristics of spamming posts, indicating that spammers are highly likely to dwell on some certain topics such as promotion. But this may not be applicable to the current scenario of smart spammers. In this paper, we propose an efficient feature extraction method. In this method, two new topic-based features are extracted and used to discriminate human-like spammers from legitimate users. We consider the historical tweets of each user as a document and use the Latent Dirichlet Allocation (LDA) model to compute the topic distribution for each user. Based on the calculated topic probability, two topic-based features, the Local Outlier Standard Score (LOSS) which captures the user's interests on different topics and the Global Outlier Standard Score (GOSS) which reveals the user's interests on specific topic in comparison with other users', are extracted. The two features contain both local and global information, and the combination of them can distinguish human-like spammers effectively. To the best of our knowledge, it is the first time that features based on topic distributions are used in spammer classification. Experimental results on one public dataset and one self-collected dataset further validate that the two sets of extracted topic-based features get excellent performance on human-like spammer classification problem compared with other state-of-the-art methods. In addition, we build a Weibo dataset, which contains both legitimate users and spammers. To summarize, our major contributions are two-fold: In the following sections, we first propose the topic-based features extraction method in Section 2, and then introduce the two datasets in Section 3. Experimental results are discussed in Section 4, and we conclude the paper in Section 5. 
Future work is presented in Section 6. Methodology In this section, we first provide some observations we obtained after carefully exploring the social network, and then the LDA model is introduced. Based on the LDA model, the ways to obtain the topic probability vector for each user and the two topic-based features are provided. Observation After exploring the homepages of a substantial number of spammers, we have two observations. 1) Social spammers can be divided into two categories. One is content polluters, whose tweets are all about certain kinds of advertisements and campaigns. The other is fake accounts, whose tweets resemble legitimate users' but seem to be simply random copies of others to avoid being detected by anti-spam rules. 2) Legitimate users, content polluters and fake accounts show different patterns in the topics that interest them. Legitimate users mainly focus on a limited set of topics that interest them. They seldom post content unrelated to their concerns. Content polluters concentrate on certain topics. Fake accounts focus on a wide range of topics due to random copying and retweeting of other users' tweets. Spammers and legitimate users show different interests in some topics, e.g., commercial content, weather, etc. To better illustrate our observation, Figure 1 shows the topic distribution of spammers and legitimate users in the two employed datasets (the Honeypot dataset and the Weibo dataset). We can see that on both topics (topic-3 and topic-11) there exists an obvious difference between the red bars and green bars, representing spammers and legitimate users. On the Honeypot dataset, spammers have a narrower distribution (the outliers on the red bar tail are not counted) than that of legitimate users. This is because there are more content polluters than fake accounts. In other words, spammers in this dataset tend to concentrate on limited topics. On the Weibo dataset, by contrast, fake accounts, which are interested in diverse topics, make up a large proportion of the spammers. Their distribution is flatter (i.e., the red bars) than that of the legitimate users. Therefore, we can detect spammers by means of the differences in their topic distribution patterns. LDA model Blei et al. blei2003latent first presented Latent Dirichlet Allocation (LDA) as an example of a topic model. Each document $i$ is treated as a bag of words $W=\left\lbrace w_{i1},w_{i2},...,w_{iM}\right\rbrace $ and $M$ is the number of words. Each word is attributable to one of the document's topics $Z=\left\lbrace z_{i1},z_{i2},...,z_{iK}\right\rbrace $ and $K$ is the number of topics. $\psi _{k}$ is a multinomial distribution over words for topic $k$. $\theta _i$ is another multinomial distribution over topics for document $i$. The smoothed generative model is illustrated in Figure 2. $\alpha $ and $\beta $ are hyper-parameters that affect the sparsity of the document-topic and topic-word distributions. In this paper, $\alpha $, $\beta $ and $K$ are empirically set to 0.3, 0.01 and 15. The entire content of each Twitter user is regarded as one document. We adopt Gibbs Sampling BIBREF8 to speed up the inference of LDA.
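A minimal sketch of the per-user topic inference described above, using scikit-learn's LDA with the stated hyperparameters (alpha = 0.3, beta = 0.01, K = 15); note that scikit-learn uses variational inference rather than the Gibbs sampler mentioned above, and the documents below are placeholders for each user's concatenated tweets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus: one document per user, built from all of that user's posts.
user_documents = ["all tweets of user one concatenated", "all tweets of user two concatenated"]

vectorizer = CountVectorizer(max_features=20000)
bow = vectorizer.fit_transform(user_documents)

lda = LatentDirichletAllocation(
    n_components=15,        # K
    doc_topic_prior=0.3,    # alpha
    topic_word_prior=0.01,  # beta
    random_state=0,
)
X = lda.fit_transform(bow)  # X[i] is the topic probability vector for user i
```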
Based on LDA, we can get the topic probabilities for all users in the employed dataset as $X=\left\lbrace X_{1},X_{2},...,X_{n}\right\rbrace $, where $n$ is the number of users. Each element $X_{i}$ is a topic probability vector for the $i^{th}$ document. $X_{i}$ is the raw topic probability vector, and our features are developed on top of it. Topic-based Features Using the LDA model, each person in the dataset is associated with a topic probability vector $X_i$. Assume $x_{ik}\in X_{i}$ denotes the likelihood that the $i^{th}$ tweet account favors the $k^{th}$ topic in the dataset. Our topic-based features can be calculated as below. Global Outlier Standard Score measures the degree to which a user's tweet content is related to a certain topic compared to the other users. Specifically, the "GOSS" score of user $i$ on topic $k$ can be calculated as Eq.( 12 ): $$\centering \begin{array}{ll} \mu \left(x_{k}\right)=\frac{\sum _{i=1}^{n} x_{ik}}{n},\\ GOSS\left(x_{ik}\right)=\frac{x_{ik}-\mu \left(x_k\right)}{\sqrt{\underset{i}{\sum }\left(x_{ik}-\mu \left(x_{k}\right)\right)^{2}}}. \end{array}$$ (Eq. 12) The value of $GOSS\left(x_{ik}\right)$ indicates the degree of interest of this person in the $k^{th}$ topic. Specifically, if $GOSS\left(x_{ik}\right)$ > $GOSS\left(x_{jk}\right)$, it means that the $i^{th}$ person has more interest in topic $k$ than the $j^{th}$ person. If the value $GOSS\left(x_{ik}\right)$ is extremely high or low, the $i^{th}$ person shows extreme interest or no interest in topic $k$, which will probably be a distinctive pattern in the following classification. Therefore, the topics liked or disliked by the $i^{th}$ person can be manifested by $f_{GOSS}^{i}=[GOSS(x_{i1})\cdots GOSS(x_{iK})]$, from which the pattern of this person's topics of interest is found. We denote $f_{GOSS}^{i}$ as our first topic-based feature, and it hopefully can achieve good performance on spammer detection. Local Outlier Standard Score measures the degree of interest someone shows in a certain topic by considering his own homepage content only. For instance, the "LOSS" score of account $i$ on topic $k$ can be calculated as Eq.( 13 ): $$\centering \begin{array}{ll} \mu \left(x_{i}\right)=\frac{\sum _{k=1}^{K} x_{ik}}{K},\\ LOSS\left(x_{ik}\right)=\frac{x_{ik}-\mu \left(x_i\right)}{\sqrt{\underset{k}{\sum }\left(x_{ik}-\mu \left(x_{i}\right)\right)^{2}}}. \end{array}$$ (Eq. 13) $\mu (x_i)$ represents the average degree of interest across all topics with regard to the $i^{th}$ user and his tweet content. Similarly to $GOSS$, the topics liked or disliked by the $i^{th}$ person, considering only his own posts, can be manifested by $f_{LOSS}^{i}=[LOSS(x_{i1})\cdots LOSS(x_{iK})]$, and $LOSS$ becomes our second topic-based feature for the $i^{th}$ person. Dataset We use one public dataset, the Social Honeypot dataset, and one self-collected dataset, the Weibo dataset, to validate the effectiveness of our proposed features. Social Honeypot Dataset: Lee et al. lee2010devils created and deployed 60 seed social accounts on Twitter to attract spammers by reporting back which accounts interact with them. They collected 19,276 legitimate users and 22,223 spammers in their dataset, along with their tweet content, over 7 months.
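Equations (12) and (13) above translate directly into a few lines of NumPy; here X stands for the n-by-K user-topic matrix obtained from LDA, and the small epsilon is added only to avoid division by zero in degenerate cases.

```python
import numpy as np

def goss(X, eps=1e-12):
    """Global Outlier Standard Score: normalize each topic column across users (Eq. 12)."""
    mu_k = X.mean(axis=0, keepdims=True)                         # per-topic mean over users
    denom = np.sqrt(((X - mu_k) ** 2).sum(axis=0, keepdims=True)) + eps
    return (X - mu_k) / denom

def loss_feature(X, eps=1e-12):
    """Local Outlier Standard Score: normalize each user row across topics (Eq. 13)."""
    mu_i = X.mean(axis=1, keepdims=True)                         # per-user mean over topics
    denom = np.sqrt(((X - mu_i) ** 2).sum(axis=1, keepdims=True)) + eps
    return (X - mu_i) / denom

# Feature vector for user i: the GOSS row, the LOSS row, or their concatenation, e.g.
# features = np.hstack([goss(X), loss_feature(X)])
```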
The Social Honeypot dataset is our first test dataset. Our Weibo Dataset: Sina Weibo is one of the most famous social platforms in China. It has implemented many features from Twitter. The 2197 legitimate user accounts in this dataset are provided by the Tianchi Competition held by Sina Weibo. The spammers are all purchased commercially from multiple vendors on the Internet. We checked them manually and collected 802 suitable "smart" spammer accounts. Preprocessing: Before directly performing the experiments on the employed datasets, we first delete some accounts with few posts in the two employed datasets, since the number of tweets is highly indicative of spammers. For the English Honeypot dataset, we remove stopwords, punctuation and non-ASCII words, and apply stemming. For the Chinese Weibo dataset, we perform segmentation with "Jieba", a Chinese text segmentation tool. After the preprocessing steps, the Weibo dataset contains 2197 legitimate users and 802 spammers, and the Honeypot dataset contains 2218 legitimate users and 2947 spammers. It is worth mentioning that the Honeypot dataset has been reduced because most of the Twitter accounts only have a limited number of posts, which are not enough to reveal their topical interests. Evaluation Metrics The evaluation indicators in our model are shown in Table 2. We calculate precision, recall and F1-score as in Eq. ( 19 ). Precision is the ratio of selected accounts that are spammers. Recall is the ratio of spammers that are detected as such. F1-score is the harmonic mean of precision and recall. $$precision =\frac{TP}{TP+FP}, \quad recall =\frac{TP}{TP+FN}, \quad F1\text{-}score = \frac{2\times precision \times recall}{precision + recall}$$ (Eq. 19) Performance Comparisons with Baseline Three baseline classification methods, Support Vector Machines (SVM), Adaboost, and Random Forests, are adopted to evaluate our extracted features. We test each classification algorithm with scikit-learn BIBREF9 and run 10-fold cross validation. On each dataset, the employed classifiers are trained with each individual feature first, and then with the combination of the two features. From Table 1, we can see that GOSS+LOSS achieves the best F1-score among all feature sets. Besides, the classification with the combination of LOSS and GOSS increases accuracy by more than 3% compared with the raw topic distribution probability. Comparison with Other Features To compare our extracted features with previously used features for spammer detection, we use the three most discriminative feature sets according to Lee et al. lee2011seven (Table 4). Two classifiers (Adaboost and SVM) are selected to conduct feature performance comparisons. Using Adaboost, our LOSS+GOSS features outperform all other features except for UFN, which is 2% higher than ours with regard to precision on the Honeypot dataset. This is caused by the incorrectly classified spammers, who are mostly news sources according to our manual check. They keep posting all kinds of news pieces covering diverse topics, which is similar to the behavior of fake accounts. However, UFN, based on friendship networks, is more useful for public accounts that possess a large number of followers. The best recall value of our LOSS+GOSS features using SVM is up to 6% higher than the results of the other feature groups. Regarding F1-score, our features outperform all other features. To further show the advantages of our proposed features, we compare our combined LOSS+GOSS with the combination of all the features from Lee et al. lee2011seven (UFN+UC+UH).
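The 10-fold cross-validation with the three scikit-learn classifiers can be sketched as follows; the feature matrix and labels are randomly generated placeholders standing in for the GOSS+LOSS features and the spammer/legitimate labels.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_validate

rng = np.random.default_rng(0)
features = rng.random((300, 30))          # placeholder for [GOSS | LOSS] features
labels = rng.integers(0, 2, size=300)     # placeholder spammer / legitimate labels

classifiers = {
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(),
    "RandomForest": RandomForestClassifier(),
}

for name, clf in classifiers.items():
    scores = cross_validate(clf, features, labels, cv=10,
                            scoring=["precision", "recall", "f1"])
    print(name,
          scores["test_precision"].mean(),
          scores["test_recall"].mean(),
          scores["test_f1"].mean())
```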
It is obvious that LOSS+GOSS have a great advantage over UFN+UC+UH in terms of recall and F1-score. Moreover, by combining our LOSS+GOSS features with the UFN+UC+UH features, we obtain another 7.1% and 2.3% performance gain with regard to precision and F1-score with Adaboost, though there is a slight decline in terms of recall. With SVM, we get comparable results on recall and F1-score but about a 3.5% improvement on precision. Conclusion In this paper, we propose a novel feature extraction method to effectively detect "smart" spammers who post seemingly legitimate tweets and are thus difficult to identify with existing spammer classification methods. Using the LDA model, we obtain the topic probability for each Twitter user. By utilizing the topic probability result, we extract our two topic-based features: GOSS and LOSS, which represent the account with global and local information. Experimental results on a public dataset and a self-built Chinese microblog dataset validate the effectiveness of the proposed features. Future Work In future work, the combination method of local and global information can be further improved to maximize their individual strengths. We will also apply decision theory to enhance the performance of our proposed features. Moreover, larger datasets on both Twitter and Weibo are being built to further validate our method. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: LDA is an unsupervised method; is this paper introducing an unsupervised approach to spam detection? Only give me the answer and do not output any other words.
Answer:
[ "No", "No" ]
276
3,552
single_doc_qa
62d71a0cc76b433890cda922471b2890
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Paper Info Title: CONTOUR COMPLETION USING DEEP STRUCTURAL PRIORS Publish Date: 9 Feb 2023 Author List: Ali Shiraee, Morteza Rezanejad, Mohammad Khodadad, Dirk Walther, Hamidreza Mahyar Figure Figure 1: Just by looking at subfigure (a), we, as humans, can easily perceive a shape like the one in subfigure (b).This is an extraordinary capability of our human brain and in this paper, we tried to see whether convolutional neural networks can show such capabilities. Figure 2: The trajectory from random noise X N to the incomplete image X I in image space.The network will pass a completed version of the image, X C , throughout this trajectory. Figure 4: This figure shows our iterative process to complete the fragmented contours of an image given as input to our pipeline. Figure 5: This example shows how different scores change throughout a single run.All three scores change in the range of [0, 100].Our goal is to maximize reconstruction_score and minimize the overfit_score, but we should consider that the minimization lower bound is data dependent and is not zero. Figure 6: Evolutionary process of the deep structure prior.The right column shows the incomplete shapes given to the model and the rest of the columns show how the model is overfitting gradually to produce the incomplete shapes.In each column, we are showing an intermediate iteration of this process.The loss-term setup enables our pipeline to let the completed image appears during this iterative process. Average MSE and IoU values between the incomplete (Raw) images, the output of DIP and DSP methods, and ground truth for each image are provided in this table. For this experiment, we ran the model over a subset of complex dataset with 500 incomplete images at various levels of alpha for 250 iterations.After the image completion is done, we compared the evaluation metrics between the completed image and the ground truth to examine the performance of the model for different values of alpha. In this table, we show the effect of the receptive filter size on our algorithm's capability to fill in bigger gap sizes.The numbers in this table are showing the percentage of the time that DIP was successful to complete shapes with each gap size and corresponding receptive field size.As predicted, the bigger the filter size, the more successful the algorithm is in filling in the gaps. abstract Humans can easily perceive illusory contours and complete missing forms in fragmented shapes. This work investigates whether such capability can arise in convolutional neural networks (CNNs) using deep structural priors computed directly from images. In this work, we present a framework that completes disconnected contours and connects fragmented lines and curves. In our framework, we propose a model that does not even need to know which regions of the contour are eliminated. We introduce an iterative process that completes an incomplete image and we propose novel measures that guide this to find regions it needs to complete. Our model trains on a single image and fills in the contours with no additional training data. Our work builds a robust framework to achieve contour completion using deep structural priors and extensively investigate how such a model could be implemented. Introduction The human visual system is used to seeing incomplete outlines. 
Our brains can effortlessly group visual elements and fragmented contours that seem to be connected to each other. This power enables us to make shapes, organize disconnected visual features, and even infer properties of 3D surfaces when projected onto 2D planes. Earlier work demonstrated how early vision may quickly complete partially-occluded objects using monocular signals. This capability of perceptual grouping has been studied in vision science for decades. Although there has been some work on perceptual grouping in the past couple of years, it has been less studied in the past decade due to the enormous progress of deep neural networks and their success in dealing with the pixel-by-pixel inference of images. Different types of lines and curves have been studied to maximize the connectivity of two broken ends in the planar contour completion problem. Geometry-based constraints can be utilized to address some challenges of contour completion problems, such as smoothness and curvature consistency. However, such approaches only work for simple, smooth contours and usually fail in more complex settings. On the other hand, we currently have deep models that could easily take an incomplete image and complete the missing regions given enough training data. The amazing capability of such models, especially those that are trained on different modalities with millions or billions of training examples, raises the question of whether we need such a large amount of training to perceive all the visual cues that are present in an image, which underlies visual perception by humans. In human vision, Gestalt psychology suggests that our brain is designed to perceive structures and patterns that are grouped by some known rules. In this work, we show that some perceptual structures can also be learned from the image itself directly using architectures that enable such learning. Earlier work has shown that some forms of perceptual grouping can be achieved using computational models, such as stochastic completion fields. This type of learning resonates with some of the Gestalt perceptual grouping principles including "proximity", "good continuation" and "similarity". In scenarios where color and/or texture are present, the cue of "similarity" helps us group regions with consistent patterns. When color and texture are present, they provide a collection of rich information for such cues. In the present article, we probe convolutional neural networks in a scenario where both are absent, and the neural network is dealing with just forms and shapes. Specifically, we explore whether the convolutional neural network architecture itself can give rise to some of these grouping cues when it is fed just contours and shapes alone. For years, neural networks have been treated as black boxes that can generalize very well to multiple classes of images when there are enough training exemplars. One of the reasons that neural networks are trained on many exemplars is to avoid the problem of overfitting. On the other hand, we know that CNNs that generalize well to large classes of exemplars can easily overfit when those class labels are randomly permuted.
Inspired by this observation, the authors of Deep Image Prior suggest that image priors can be learned to a large extent through a generator network architecture that is solely trained on a single image. This encouraged us to take a deeper look at what structural information can be learned from a single-shape image and whether we can reconstruct some of those perceptual grouping capabilities using a generator network. Inspired by that idea, in this work, we adopt a novel training regime to complete shapes and contours where we use a UNet architecture with random initial weights and try to complete the contours within a single image without any training data. In our model, we imagine that the input image (i.e., the only image used to update the model's weights) is an image of fragmented contours. In this work, instead of training the model on multiple images fetched from a big image dataset, we use a fixed random noise tensor as input to this model. At each iteration, the random noise tensor is passed through our generative network, and the network produces an output image. We introduce a novel loss function that enables this network to complete contours. This process repeats, and the weights of our network are updated gradually based on this loss function, which is an energy term defined based on the input image and the output of the network. The model will reconstruct the missing structures, i.e., group fragmented contours that perceptually seem to be connected, before it fully overfits to the incomplete input image. Contributions of our work are summarized as follows: 1. In our pipeline, we propose a novel algorithm that enables us to complete contours that appear to be connected to each other in an illusory form. 2. Our model is trained on just one single query image and does not need any training data. 3. Our model does not need to know which regions of the image are masked or occluded, i.e., we remove the dependency of the algorithm on the guiding mask (a guiding mask is a mask that informs the model of where the missing regions are located). We also introduce two metrics to produce a stopping criterion to know when to stop training before the model fully overfits to the incomplete image, i.e., we guide the model to stop when the completed image is produced. Methods Our eyes are trained to predict a missing region of an occluded object within a scene. We can easily perceive or make guesses about parts of objects or shapes that we do not necessarily see. Even when we are looking at an image, we might guess about the shape, property, or other attributes of an unknown object within a scene. Such capability extends beyond just known objects or shapes. We can look at a disconnected set of contours and guess what the connected form may look like. This capability is rooted in our prior knowledge about the world (see Figure ). In this work, we aim to achieve a similar capability using deep generative networks. Most neural networks that we work with these days are trained with a massive amount of data, and one might think that this is the only way that a neural network can obtain prior information. Authors of Deep Image Prior (DIP) suggest that the convolutional architecture can capture a fair amount of information about the image distribution. They show that hourglass architectures like UNet can perform well on inverse problems such as image denoising, super-resolution, and inpainting. In this work, we focus on completing fragmented contours end-to-end just by using a single image.
To be able to address this problem, we first look at a similar problem in image editing, known as image inpainting. Image inpainting is the task of completing an image where some regions of that image are covered or masked out. In image inpainting, the generative model receives a masked image together with the mask that guides the algorithm to fill in those missing regions. Although we have a very similar goal in the problem of contour completion, the additional challenge is that we do not necessarily have a mask that covers the regions of interest. For example, when we look at Figure (left), we are not told by a guiding mask which regions of the image are incomplete. Our brain figures this out just by looking at the form and predicting those missing regions. Inspired by the image inpainting work of DIP, we propose a novel algorithm for the contour completion problem (see Figure ), where, unlike DIP, we do not have a guiding mask to know where to fill in the missing regions of our disconnected contours. Let us assume that we are given a degraded image x I containing a fragmented contour. We propose an iterative process (see Figure ) that can connect those discontinuities and glue those fragmented pieces together as follows. We propose an hour-glass model structure (f) that is initially set up with completely random parameters (θ 0). Through an iterative process, we start feeding our network with a fixed random noise tensor z and obtain the inferred output (f(z)) from that network. We then back-propagate the difference between the inferred output and the incomplete image to the network. We then repeat this process until the difference between the generated outcome of the network (f θ (z)) and the incomplete image (x I) gets smaller and smaller and the output finally overfits to the incomplete image (x I). In this work, we propose a novel error metric to backpropagate through the model and update its weights. We set the metric in a way that enables us to complete the incomplete image before the model overfits to it. This is where the magic of our algorithm happens. We also propose a stopping criterion, so that when the image is complete, we no longer overfit the outcome of the model and instead produce a plausible connected set of fragmented contour pieces. As illustrated in Figure , this trajectory will pass through a complete version of the image in image space, which is close to the actual connected ground truth x gt, which we do not have access to directly. Energy Function We can model this iterative process mathematically by maximizing a posterior distribution. Let us assume that the optimal image x * that we want to achieve is on a path that connects a random noise tensor z to the incomplete image x I. With this assumption, we can eventually overfit any random noise tensor to the incomplete image x I, and we can formulate the posterior distribution of our desired optimal image x * as p(x* | x_I) ∝ p(x_I | x*) p(x*). No Prior To better recapitulate what we want to achieve using our generative model, we solve an energy minimization problem on the parameter space of the model, rather than explicitly working with probability distributions and optimizing on x (image space). Thus, we solve an energy minimization problem that incorporates the incomplete image (x I) and the model output (f θ (z)): θ* = argmin_θ E(f_θ(z); x_I). As shown in Figure , the pipeline starts from a randomly initialized set of parameters θ and updates those weights until it reaches a local minimum θ *.
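Operationally, the energy minimization above is an ordinary gradient-descent loop over the network weights with a fixed noise input. The sketch below assumes a U-Net-like `net` and leaves the energy function abstract; the concrete maskless loss is sketched after the next passage.

```python
import torch

def optimize_single_image(net, x_incomplete, energy, n_iters=250, lr=0.01):
    """Deep-structural-prior style loop: fit a randomly initialized `net` to a single
    incomplete image by feeding it a fixed random noise tensor z."""
    z = torch.randn(1, 3, 256, 256)                  # fixed input noise, never updated
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    outputs = []
    for _ in range(n_iters):
        opt.zero_grad()
        x_hat = net(z)
        loss = energy(x_hat, x_incomplete)
        loss.backward()
        opt.step()
        outputs.append(x_hat.detach())
    return outputs  # candidate completions; a stopping criterion picks one of them
```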
The only information provided to the network is the incomplete image x I. When we reach the optimal θ *, the completed image is obtained as x * = f θ * (z), where z is a random noise tensor. In this work, we use a U-Net architecture with skip connections as the generator model. As we mentioned previously, in this work we were inspired by an earlier work known as Deep Image Prior (DIP). In that work, the authors suggested a mean-squared-error loss term that compares the output of the generator to the incomplete input image, E(x; x_I) = ||(x − x_I) ⊙ m||^2, where x I is the incomplete image with missing pixels in correspondence with a binary mask m ∈ {0, 1}^{H×W} and the operator ⊙ denotes point-wise multiplication of two image matrices. In inpainting tasks, the existence of a mask is essential, as the algorithm needs to know where to fill in the missing area, whereas in our work we wanted to know whether the network can perform completion on its own without the need for the mask. In other words, is it possible for the network to predict where to fill in at the same time that it is trying to reconstruct the incomplete image through the iterative process? To answer this question, we tried to solve a much harder problem in which the mask is not provided to the model and the model is agnostic to it. To better understand how a solution could be hypothesized for this problem, we first imagine that we want to consider all the available regions in our image that could be potential places to fill in, i.e., we set the mask m in the DIP loss above to be equal to the incomplete image x I. This is problematic, as the model quickly tries to fill in all white space and thereby quickly reconstructs the incomplete image. On the other hand, we can take the inverse of the current problem, where the model tries to fill in only the regions that the fragmented contour lives in. Taking these two together, we came up with a novel loss term for the energy minimization that removes the need for the mask in the contour completion problem. In this term, we introduce a linear combination of the two loss terms, where one focuses on reconstructing the missing regions in the foreground, and one focuses on avoiding inpainting regions in the background. The logic behind this is that, if we assume the original image to be representative of the mask, then the model tries to reconstruct in all white regions (the foreground), and in the inverse problem we just want to reconstruct the regions that are already part of the ground truth. Stopping Criteria As shown in Figure , knowing when to stop iterating towards the over-fitted model is key to obtaining a completed shape. Therefore, we equipped our model with a criterion that uses two individual novel terms to know when to stop and output the result of the network. These two metrics expand the capability of the generator network beyond what it does currently and achieve a full end-to-end contour completion model that trains and infers on a single image of divided contour fragments. These new terms are the reconstruction_score (ρ) and the overfit_score (ω). Reconstruction Score The first score that this paper suggests is the reconstruction score, i.e., we have to make sure that the model is trained enough that it can reconstruct at least the entire set of fragmented contours within the image.
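A hedged reading of the maskless loss: with contours drawn in black on a white background, the incomplete image itself can play the role of the mask, so one term penalizes deviations on the white (background) regions and the other on the black contour pixels. The exact weighting below is our interpretation of the linear combination described above, with alpha = 0.15 as reported in the implementation details.

```python
import torch

def dsp_energy(x_hat, x_incomplete, alpha=0.15):
    """Two-term maskless loss, as we read the description above.

    x_incomplete is assumed to be in [0, 1], with contours near 0 (black)
    and background near 1 (white), so it can act as its own soft mask."""
    diff2 = (x_hat - x_incomplete) ** 2
    foreground = (diff2 * x_incomplete).mean()           # DIP-style term with mask = x_I
    background = (diff2 * (1.0 - x_incomplete)).mean()   # inverse term with mask = 1 - x_I
    return alpha * foreground + (1.0 - alpha) * background
```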
The reconstruction score is a trivial score: to compute the reconstruction_score (ρ), we apply a k-dimensional tree (KDTree) nearest-neighbor lookup to find the ratio of points in the original incomplete image (x I) that are reproduced in the network output. This score ranges from 0 to 100. Overfit Score It is evident that the model overfits the fragmented contours. This is due to the fact that the error in our loss term is minimized as x overfits to x I, i.e., replacing x with x I in the loss term would give us zero. As we hypothesize that the iterative process also produces the complete image before it overfits to the incomplete image, we can imagine that at some point the image is complete (x C) and does not need to be fine-tuned any more to overfit to x I. We suggest a new score called overfit_score. overfit_score determines how much of the reconstructed outcome goes beyond the pixels that are already in the incomplete image (x I). To compute the overfit_score (ω), we apply a KDTree nearest-neighbor lookup on the points in the network output and see what portion of those points are novel and not already in the incomplete image (x I). Similar to the reconstruction_score, the overfit_score also ranges from 0 to 100. Combined Score To be able to find the best possible set of completed contours, we combine the two and, over a run, try to achieve close to full reconstruction while avoiding over-fitting at the same time. This is what we call an "ideal" stopping point in the contour completion problem. In each run, across all iterations, we pick the output of the network that minimizes a dissimilarity term δ, where δ represents our dissimilarity score. The reconstruction_score and overfit_score are obtainable given the network output and the incomplete image. Ideally, we want to achieve an output image that has a reconstruction_score equal to 100 and an overfit_score of γ, which is a hyperparameter that depends on the image and the complexity of the shape. Empirically, we observed that this value is highly correlated with the size of the gaps observed in the fragmented contours, i.e., a larger gap results in a larger γ value. We will discuss this in more detail in the next section (see Section 3). For one sample image, we computed the two metrics reconstruction_score and overfit_score and the combined value of dissimilarity (δ) and showed how these values change (see Figure ). Our initial observations show that the reconstruction_score quickly increases to 100 for the incomplete image, indicating that the already existing fragments of the contours have been reconstructed in the output. However, as mentioned previously, we cannot solely rely on this score, since we also want to minimize the overfitting. Remember that our goal is to produce an output that: a) preserves the original contours in the incomplete image and b) fills in the gaps between the fragmented contours. It is evident that the overfit_score decreases throughout an iterative run of our process until it reaches zero. The dissimilarity will also decrease along with the overfit up to a point, and then it will increase as the model tries to reproduce the incomplete image. This is where an ideal γ value can be picked, i.e., where to stop when the reconstruction is good but we have not yet fully overfit to the incomplete image.
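The two stopping scores and the combined dissimilarity can be approximated with SciPy's cKDTree over the coordinates of contour (black) pixels; the matching tolerance and the exact combination into δ are assumptions on our part, and γ is the data-dependent target overfit level discussed above.

```python
import numpy as np
from scipy.spatial import cKDTree

def contour_points(img, thresh=0.5):
    """Coordinates of black (contour) pixels in a grayscale image in [0, 1]."""
    return np.argwhere(img < thresh)

def scores(output_img, incomplete_img, tol=1.0):
    out_pts = contour_points(output_img)
    in_pts = contour_points(incomplete_img)
    if len(out_pts) == 0 or len(in_pts) == 0:
        return 0.0, 0.0
    d_in_to_out, _ = cKDTree(out_pts).query(in_pts)   # is every original point reproduced?
    reconstruction = 100.0 * np.mean(d_in_to_out <= tol)
    d_out_to_in, _ = cKDTree(in_pts).query(out_pts)   # how many output points are new?
    overfit = 100.0 * np.mean(d_out_to_in > tol)
    return reconstruction, overfit

def dissimilarity(reconstruction, overfit, gamma=5.0):
    # Assumed combination: prefer reconstruction near 100 and an overfit level near gamma.
    return abs(100.0 - reconstruction) + abs(overfit - gamma)
```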
Thus, one should pick the value of γ empirically in the scenario that the ground truth is not available, whereas, assuming that the ground truth is available, we can easily compute the best γ value. In our experiments, we tried two datasets of images with different gap sizes. We observed that the best the γ for one set of samples is ∼ 5 (the set with shorter gaps) while it is ∼ 23 for samples from the other set, i. e, the set with longer gaps (see Figure for some completed examples). Experiments and Results Performing unsupervised contour completion is a difficult task to benchmark as one can never know what fragments exactly are connected to each other in a real-world scenario. This makes the problem of contour completion a hard problem to solve. In this paper, we tried to create artificial shapes that are occluded by some masks and then tried to see if our model can regenerate the missing pieces and glue those divided contours together. To demonstrate our model's behavior, we will conduct experiments on datasets created for this task and will report on them in this section. To compare network results in different settings, we will use pixel-wise Mean Squared Error (MSE) and Intersection over Union (IoU) between the produced result of the network and unmasked ground truth data and the reconstructed image on black pixels (where contours live). Data We prepared two datasets, one labeled "Simple" and one "Complex", in accordance with the number of gaps in each shape. Both datasets contain nine different categories of shapes. In order to generate the Complex dataset, we used FlatShapeNet which is a dataset for the educational game Ariga. The dataset includes the following categories: Circle, Kite, Parallelogram, Rectangle, Rhombus, Square, Trapezoid, Triangle and Overlap. The "overlap" category contains images that are made as a mixture of two shapes that are overlapping from the previous categories. These are some standard shapes with a few gaps in simple dataset, while the complex dataset has some hand-drawn shapes with fragmented lines and more gaps that produce more variety in general. For each instance, a ground truth image is available for comparison. Most of our experiments have been conducted using the complex dataset in order to evaluate the generalization of our approach. For the analysis of how γ values should be set for each shape, we used the simple dataset as a reference. Evaluation In this section, we compare our model to the original Deep Image Prior (DIP) inpainting model. DIP's inpainting module accepts a degraded image and a binary mask corresponding to that image. In order to make a fair comparison, instead of providing a binary mask, we used the incomplete images both as input and as a mask in order to see whether it can produce a result similar to ours. For DIP, we run the iterative process for a maximum number of 2500 iterations with the U-net backbone. We used the exact same architecture and setting in our model for a fair comparison. Using our ground truth dataset images, we calculate the MSE loss between the network output and ground truth during each iteration instead of relying on our stopping mechanism described in the previous section. We then store the output with minimal loss throughout all the iterations. Finally, we select the best output among all iterations, report the MSE and IoU with the ground truth, and save the iteration number which resulted in the lowest MSE. 
Table compares the results obtained using the DIP method, the DSP method (ours), and the difference between the raw images and the ground truth. We have presented the average MSE loss, the average IoU, and the average number of iterations for the best output of the different methods. As can be seen from the table, our model improves both MSE and IoU between the incomplete image and the ground truth in fewer iterations. The DIP method can neither generate a better result than the raw image nor provide stopping criteria to prevent overfitting. We provide a more detailed analysis of this result in Figure . As the results show, our algorithm not only provides much faster convergence but also consistently provides a better-completed image (consistently lower MSE loss and better IoU), whereas it is challenging for the DIP method to accomplish better results without a guiding mask. We compare the MSE loss between the degraded raw images that these algorithms started with (shown in blue) and the outputs of the two methods. (a) Mean Squared Error Loss: we clearly see that for almost all images, DSP (green) achieves a lower MSE than the incomplete images (blue), whereas the DIP-completed images either do not improve the MSE or even worsen it compared to the incomplete images. Note that the MSE is computed against a ground truth image hidden from our methods (lower is better). (b) Intersection Over Union: here, we look at the IoU metric, which specifies the intersection over union between the obtained images and the ground truth data. Again, we see that DSP produces images that are much closer to the ground truth (in most cases), whereas DIP cannot achieve a similar result. While a few DIP-completed images produce a better IoU than the degraded images, most of them are worse than the starting image (higher is better). (c) The number of iterations needed for each algorithm to obtain its best results. Here, we see that DSP quickly produces the best outcome with the least MSE loss, whereas the DIP algorithm's best results appear when we run the iterative process for more iterations (lower is better). Correlation of γ with the Gap Size To better understand the γ parameter of our combined score, we conducted the following experiment. A total of 12000 samples were obtained by merging all of our images from the two datasets, simple and complex. As we have access to the ground truth for each degraded image in the combined dataset, we can easily calculate the reconstruction_score and overfit_score for each degraded-ground truth pair. As expected, we obtain a reconstruction_score of 100 for all samples, but the overfit_score varies among them. Intuitively, we hypothesized that an optimal value of the overfit score should be intertwined with the total area of the gaps. To test this hypothesis, we did the following experiment. We first define a function φ(x) which takes a binary, black and white image x and returns the number of black pixels in it. Then we define a gap term as gap = (φ(x_gt) − φ(x_I)) / φ(x_gt), where x I is the incomplete image and x gt is the ground truth. In this case, gap indicates the total area of the gap with respect to the whole shape. We found that this term and the best γ value have a correlation of 97.43%. This indicates that the value of γ is highly correlated with the gap size, which is expected. Effect of α We conducted additional experiments concerning how α affects the quality of reconstruction.
In the previous section, we defined Equation 2 as the loss term that guides our iterative process. The term α specifies the amount of emphasis the model should place on reconstructing missing regions, rather than filling in fragmented contours. A lower α indicates a better grouping quality, as shown in Equation . However, we will not achieve completion if we remove the first term completely from the loss by setting α = 0. Therefore, the first term should be kept, but its weight should be very low in order to achieve a good completion. On the other hand, if we set α = 1 and omit the second term, we lose the contour completion regularization term and obtain the same output as a vanilla deep image prior, which does not complete shapes. Effect of Receptive Field Size To better understand the effect of receptive field size on our algorithm, we test the following hypothesis: can models with bigger receptive field size complete shapes with bigger gaps? In Table , we report showing the results of this experiment. As we can see, the bigger the receptive field size, the more complete shapes we can reconstruct using DSP. Implementation Details As shown in Figure , we use a model with 5 layers and 128 channels for downsampling and upsampling convolutions, and 64 channels for skip convolutions. The upsampling and downsampling modules use 3 × 3 filters, while the skip module uses 1 × 1 filters. In the upsample part of the network, the nearest neighbor algorithm is used. We used 256 × 256 images with three channels in all of our experiments. In training, we use the MSE loss between the degraded image and the output of the network, and we optimize the loss using the ADAM optimizer and a learning rate equal to 0.01 . In our experiments, we also used α = 0.15 as an optimal proportion coefficient for reconstruction loss. Conclusion In this work, we introduced a novel framework for contour completion using deep structure priors (DSP). This work offers a novel notion of a maskless grouping of fragmented contours. In our proposed framework, we introduced a novel loss metric that does not require a strict definition of the mask. Instead, it lets the model learn the perceivable illusory contours and connects those fragmented pieces using a generator network that is solely trained on just the single incomplete input image. Our model does not require any pre-training which demonstrates that the convolutional architecture of the hour-glass model is able to connect disconnected contours. We present an extended set of experiments that show the capability of our algorithm. We investigate the effect of each parameter introduced in our algorithm separately and show how one could possibly achieve the best result for their problem using this model. In future work, we plan to extend this model and try to see how it performs with real images. In particular, we want to determine whether we can inpaint real-world photographs while retaining perceptually aware scene structures. The importance of shape in perception by deep neural networks has been highlighted in many adversarial examples to appearance-based networks . The outcome of this work has strong potential to impact the designing and implementation of models that are robust to such perturbations. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How does the receptive field size affect the completion of shapes? Only give me the answer and do not output any other words.
Answer:
[ "Bigger receptive field size leads to more successful shape completion." ]
276
6,159
single_doc_qa
03edf05a969f4206aa09f89cd5d9a114
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction In the kitchen, we increasingly rely on instructions from cooking websites: recipes. A cook with a predilection for Asian cuisine may wish to prepare chicken curry, but may not know all necessary ingredients apart from a few basics. These users with limited knowledge cannot rely on existing recipe generation approaches that focus on creating coherent recipes given all ingredients and a recipe name BIBREF0. Such models do not address issues of personal preference (e.g. culinary tastes, garnish choices) and incomplete recipe details. We propose to approach both problems via personalized generation of plausible, user-specific recipes using user preferences extracted from previously consumed recipes. Our work combines two important tasks from natural language processing and recommender systems: data-to-text generation BIBREF1 and personalized recommendation BIBREF2. Our model takes as user input the name of a specific dish, a few key ingredients, and a calorie level. We pass these loose input specifications to an encoder-decoder framework and attend on user profiles—learned latent representations of recipes previously consumed by a user—to generate a recipe personalized to the user's tastes. We fuse these `user-aware' representations with decoder output in an attention fusion layer to jointly determine text generation. Quantitative (perplexity, user-ranking) and qualitative analysis on user-aware model outputs confirm that personalization indeed assists in generating plausible recipes from incomplete ingredients. While personalized text generation has seen success in conveying user writing styles in the product review BIBREF3, BIBREF4 and dialogue BIBREF5 spaces, we are the first to consider it for the problem of recipe generation, where output quality is heavily dependent on the content of the instructions—such as ingredients and cooking techniques. To summarize, our main contributions are as follows: We explore a new task of generating plausible and personalized recipes from incomplete input specifications by leveraging historical user preferences; We release a new dataset of 180K+ recipes and 700K+ user reviews for this task; We introduce new evaluation strategies for generation quality in instructional texts, centering on quantitative measures of coherence. We also show qualitatively and quantitatively that personalized models generate high-quality and specific recipes that align with historical user preferences. Related Work Large-scale transformer-based language models have shown surprising expressivity and fluency in creative and conditional long-text generation BIBREF6, BIBREF7. Recent works have proposed hierarchical methods that condition on narrative frameworks to generate internally consistent long texts BIBREF8, BIBREF9, BIBREF10. Here, we generate procedurally structured recipes instead of free-form narratives. Recipe generation belongs to the field of data-to-text natural language generation BIBREF1, which sees other applications in automated journalism BIBREF11, question-answering BIBREF12, and abstractive summarization BIBREF13, among others. BIBREF14, BIBREF15 model recipes as a structured collection of ingredient entities acted upon by cooking actions. BIBREF0 imposes a `checklist' attention constraint emphasizing hitherto unused ingredients during generation. 
BIBREF16 attend over explicit ingredient references in the prior recipe step. Similar hierarchical approaches that infer a full ingredient list to constrain generation will not help personalize recipes, and would be infeasible in our setting due to the potentially unconstrained number of ingredients (from a space of 10K+) in a recipe. We instead learn historical preferences to guide full recipe generation. A recent line of work has explored user- and item-dependent aspect-aware review generation BIBREF3, BIBREF4. This work is related to ours in that it combines contextual language generation with personalization. Here, we attend over historical user preferences from previously consumed recipes to generate recipe content, rather than writing styles. Approach Our model's input specification consists of: the recipe name as a sequence of tokens, a partial list of ingredients, and a caloric level (high, medium, low). It outputs the recipe instructions as a token sequence: $\mathcal {W}_r=\lbrace w_{r,0}, \dots , w_{r,T}\rbrace $ for a recipe $r$ of length $T$. To personalize output, we use historical recipe interactions of a user $u \in \mathcal {U}$. Encoder: Our encoder has three embedding layers: vocabulary embedding $\mathcal {V}$, ingredient embedding $\mathcal {I}$, and caloric-level embedding $\mathcal {C}$. Each token in the (length $L_n$) recipe name is embedded via $\mathcal {V}$; the embedded token sequence is passed to a two-layered bidirectional GRU (BiGRU) BIBREF17, which outputs hidden states for names $\lbrace \mathbf {n}_{\text{enc},j} \in \mathbb {R}^{2d_h}\rbrace $, with hidden size $d_h$. Similarly each of the $L_i$ input ingredients is embedded via $\mathcal {I}$, and the embedded ingredient sequence is passed to another two-layered BiGRU to output ingredient hidden states as $\lbrace \mathbf {i}_{\text{enc},j} \in \mathbb {R}^{2d_h}\rbrace $. The caloric level is embedded via $\mathcal {C}$ and passed through a projection layer with weights $W_c$ to generate calorie hidden representation $\mathbf {c}_{\text{enc}} \in \mathbb {R}^{2d_h}$. Ingredient Attention: We apply attention BIBREF18 over the encoded ingredients to use encoder outputs at each decoding time step. We define an attention-score function $\alpha $ with key $K$ and query $Q$: with trainable weights $W_{\alpha }$, bias $\mathbf {b}_{\alpha }$, and normalization term $Z$. At decoding time $t$, we calculate the ingredient context $\mathbf {a}_{t}^{i} \in \mathbb {R}^{d_h}$ as: Decoder: The decoder is a two-layer GRU with hidden state $h_t$ conditioned on previous hidden state $h_{t-1}$ and input token $w_{r, t}$ from the original recipe text. We project the concatenated encoder outputs as the initial decoder hidden state: To bias generation toward user preferences, we attend over a user's previously reviewed recipes to jointly determine the final output token distribution. We consider two different schemes to model preferences from user histories: (1) recipe interactions, and (2) techniques seen therein (defined in data). BIBREF19, BIBREF20, BIBREF21 explore similar schemes for personalized recommendation. Prior Recipe Attention: We obtain the set of prior recipes for a user $u$: $R^+_u$, where each recipe can be represented by an embedding from a recipe embedding layer $\mathcal {R}$ or an average of the name tokens embedded by $\mathcal {V}$. We attend over the $k$-most recent prior recipes, $R^{k+}_u$, to account for temporal drift of user preferences BIBREF22. 
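The attention equations referenced above appear in the paper as display math that did not survive extraction, so the sketch below assumes a standard additive attention with trainable weights $W_{\alpha }$ and bias $\mathbf {b}_{\alpha }$; the module structure and all variable names are ours, not the authors'.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """Assumed additive attention: score keys K against a query Q, normalize,
    and return a context vector as the weighted sum of the keys."""
    def __init__(self, key_dim: int, query_dim: int, hidden_dim: int):
        super().__init__()
        self.W_alpha = nn.Linear(key_dim + query_dim, hidden_dim)  # trainable weights and bias
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, keys: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # keys: (batch, L, key_dim); query: (batch, query_dim), e.g. the decoder state at time t
        q = query.unsqueeze(1).expand(-1, keys.size(1), -1)
        scores = self.v(torch.tanh(self.W_alpha(torch.cat([keys, q], dim=-1)))).squeeze(-1)
        weights = F.softmax(scores, dim=-1)  # softmax plays the role of the normalization term Z
        return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)  # context vector a_t
```

The same module can serve for the ingredient context $\mathbf {a}_{t}^{i}$ over the BiGRU ingredient states and, with the $k$ most recent recipe embeddings as keys, for the prior recipe context described above.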
These embeddings are used in the `Prior Recipe' and `Prior Name' models, respectively. Given a recipe representation $\mathbf {r} \in \mathbb {R}^{d_r}$ (where $d_r$ is recipe- or vocabulary-embedding size depending on the recipe representation) the prior recipe attention context $\mathbf {a}_{t}^{r_u}$ is calculated as Prior Technique Attention: We calculate prior technique preference (used in the `Prior Tech` model) by normalizing co-occurrence between users and techniques seen in $R^+_u$, to obtain a preference vector $\rho _{u}$. Each technique $x$ is embedded via a technique embedding layer $\mathcal {X}$ to $\mathbf {x}\in \mathbb {R}^{d_x}$. Prior technique attention is calculated as where, inspired by copy mechanisms BIBREF23, BIBREF24, we add $\rho _{u,x}$ for technique $x$ to emphasize the attention by the user's prior technique preference. Attention Fusion Layer: We fuse all contexts calculated at time $t$, concatenating them with decoder GRU output and previous token embedding: We then calculate the token probability: and maximize the log-likelihood of the generated sequence conditioned on input specifications and user preferences. fig:ex shows a case where the Prior Name model attends strongly on previously consumed savory recipes to suggest the usage of an additional ingredient (`cilantro'). Recipe Dataset: Food.com We collect a novel dataset of 230K+ recipe texts and 1M+ user interactions (reviews) over 18 years (2000-2018) from Food.com. Here, we restrict to recipes with at least 3 steps, and at least 4 and no more than 20 ingredients. We discard users with fewer than 4 reviews, giving 180K+ recipes and 700K+ reviews, with splits as in tab:recipeixnstats. Our model must learn to generate from a diverse recipe space: in our training data, the average recipe length is 117 tokens with a maximum of 256. There are 13K unique ingredients across all recipes. Rare words dominate the vocabulary: 95% of words appear $<$100 times, accounting for only 1.65% of all word usage. As such, we perform Byte-Pair Encoding (BPE) tokenization BIBREF25, BIBREF26, giving a training vocabulary of 15K tokens across 19M total mentions. User profiles are similarly diverse: 50% of users have consumed $\le $6 recipes, while 10% of users have consumed $>$45 recipes. We order reviews by timestamp, keeping the most recent review for each user as the test set, the second most recent for validation, and the remainder for training (sequential leave-one-out evaluation BIBREF27). We evaluate only on recipes not in the training set. We manually construct a list of 58 cooking techniques from 384 cooking actions collected by BIBREF15; the most common techniques (bake, combine, pour, boil) account for 36.5% of technique mentions. We approximate technique adherence via string match between the recipe text and technique list. Experiments and Results For training and evaluation, we provide our model with the first 3-5 ingredients listed in each recipe. We decode recipe text via top-$k$ sampling BIBREF7, finding $k=3$ to produce satisfactory results. We use a hidden size $d_h=256$ for both the encoder and decoder. Embedding dimensions for vocabulary, ingredient, recipe, techniques, and caloric level are 300, 10, 50, 50, and 5 (respectively). For prior recipe attention, we set $k=20$, the 80th %-ile for the number of user interactions. We use the Adam optimizer BIBREF28 with a learning rate of $10^{-3}$, annealed with a decay rate of 0.9 BIBREF29. We also use teacher-forcing BIBREF30 in all training epochs. 
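As a concrete illustration of the decoding setup mentioned above, here is a minimal top-k sampling step in PyTorch; this is a generic sketch rather than the authors' code, with the logits tensor and the choice $k=3$ following the description in the text.

```python
import torch

def top_k_sample(logits: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Sample the next token id from the k highest-probability tokens only."""
    topk_vals, topk_idx = torch.topk(logits, k, dim=-1)   # (batch, k)
    probs = torch.softmax(topk_vals, dim=-1)              # renormalize over the top-k logits
    choice = torch.multinomial(probs, num_samples=1)      # (batch, 1)
    return topk_idx.gather(-1, choice).squeeze(-1)        # (batch,)

# At each decoding step, the fused attention-and-GRU output is projected to vocabulary
# logits and the next BPE token is drawn with top_k_sample(logits, k=3).
```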
In this work, we investigate how leveraging historical user preferences can improve generation quality over strong baselines in our setting. We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8. We observe that personalized models make more diverse recipes than baseline. They thus perform better in BLEU-1 with more key entities (ingredient mentions) present, but worse in BLEU-4, as these recipes are written in a personalized way and deviate from gold on the phrasal level. Similarly, the `Prior Name' model generates more unigram-diverse recipes than other personalized models and obtains a correspondingly lower BLEU-1 score. Qualitative Analysis: We present sample outputs for a cocktail recipe in tab:samplerecipes, and additional recipes in the appendix. Generation quality progressively improves from generic baseline output to a blended cocktail produced by our best performing model. Models attending over prior recipes explicitly reference ingredients. The Prior Name model further suggests the addition of lemon and mint, which are reasonably associated with previously consumed recipes like coconut mousse and pork skewers. Personalization: To measure personalization, we evaluate how closely the generated text corresponds to a particular user profile. We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles—one `gold' user who consumed the original recipe, and nine randomly generated user profiles. Following BIBREF8, we expect the highest likelihood for the recipe conditioned on the gold user. We measure user matching accuracy (UMA)—the proportion where the gold user is ranked highest—and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user. All personalized models beat baselines in both metrics, showing our models personalize generated recipes to the given user profiles. The Prior Name model achieves the best UMA and MRR by a large margin, revealing that prior recipe names are strong signals for personalization. Moreover, the addition of attention mechanisms to capture these signals improves language modeling performance over a strong non-personalized baseline. Recipe Level Coherence: A plausible recipe should possess a coherent step order, and we evaluate this via a metric for recipe-level coherence. We use the neural scoring model from BIBREF33 to measure recipe-level coherence for each generated recipe. Each recipe step is encoded by BERT BIBREF34. 
Our scoring model is a GRU network that learns the overall recipe step ordering structure by minimizing the cosine similarity of recipe step hidden representations presented in the correct and reverse orders. Once pretrained, our scorer calculates the similarity of a generated recipe to the forward and backwards ordering of its corresponding gold label, giving a score equal to the difference between the former and latter. A higher score indicates better step ordering (with a maximum score of 2). tab:coherencemetrics shows that our personalized models achieve average recipe-level coherence scores of 1.78-1.82, surpassing the baseline at 1.77. Recipe Step Entailment: Local coherence is also crucial to a user following a recipe: it is crucial that subsequent steps are logically consistent with prior ones. We model local coherence as an entailment task: predicting the likelihood that a recipe step follows the preceding. We sample several consecutive (positive) and non-consecutive (negative) pairs of steps from each recipe. We train a BERT BIBREF34 model to predict the entailment score of a pair of steps separated by a [SEP] token, using the final representation of the [CLS] token. The step entailment score is computed as the average of scores for each set of consecutive steps in each recipe, averaged over every generated recipe for a model, as shown in tab:coherencemetrics. Human Evaluation: We presented 310 pairs of recipes for pairwise comparison BIBREF8 (details in appendix) between baseline and each personalized model, with results shown in tab:metricsontest. On average, human evaluators preferred personalized model outputs to baseline 63% of the time, confirming that personalized attention improves the semantic plausibility of generated recipes. We also performed a small-scale human coherence survey over 90 recipes, in which 60% of users found recipes generated by personalized models to be more coherent and preferable to those generated by baseline models. Conclusion In this paper, we propose a novel task: to generate personalized recipes from incomplete input specifications and user histories. On a large novel dataset of 180K recipes and 700K reviews, we show that our personalized generative models can generate plausible, personalized, and coherent recipes preferred by human evaluators for consumption. We also introduce a set of automatic coherence measures for instructional texts as well as personalization metrics to support our claims. Our future work includes generating structured representations of recipes to handle ingredient properties, as well as accounting for references to collections of ingredients (e.g. “dry mix"). Acknowledgements. This work is partly supported by NSF #1750063. We thank all reviewers for their constructive suggestions, as well as Rei M., Sujoy P., Alicia L., Eric H., Tim S., Kathy C., Allen C., and Micah I. for their feedback. Appendix ::: Food.com: Dataset Details Our raw data consists of 270K recipes and 1.4M user-recipe interactions (reviews) scraped from Food.com, covering a period of 18 years (January 2000 to December 2018). See tab:int-stats for dataset summary statistics, and tab:samplegk for sample information about one user-recipe interaction and the recipe involved. Appendix ::: Generated Examples See tab:samplechx for a sample recipe for chicken chili and tab:samplewaffle for a sample recipe for sweet waffles. 
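A hedged sketch of how the recipe-level coherence score described above can be computed once a step encoder (e.g. BERT) and the pretrained GRU scorer are available; the encoder and scorer objects, the final-hidden-state pooling, and all names are assumptions on our part.

```python
import torch
import torch.nn.functional as F

def coherence_score(gru_scorer, step_encoder, generated_steps, gold_steps) -> float:
    """Score = cos(generated, gold forward) - cos(generated, gold reversed);
    higher is better, with a maximum of 2."""
    def encode(steps):
        vecs = torch.stack([step_encoder(s) for s in steps])  # one BERT vector per recipe step
        _, h = gru_scorer(vecs.unsqueeze(1))                  # (seq_len, batch=1, dim) for nn.GRU
        return h[-1].squeeze(0)                               # final hidden state as the recipe vector

    gen, fwd = encode(generated_steps), encode(gold_steps)
    bwd = encode(list(reversed(gold_steps)))
    return (F.cosine_similarity(gen, fwd, dim=0) - F.cosine_similarity(gen, bwd, dim=0)).item()
```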
Human Evaluation We prepared a set of 15 pairwise comparisons per evaluation session, and collected 930 pairwise evaluations (310 per personalized model) over 62 sessions. For each pair, users were given a partial recipe specification (name and 3-5 key ingredients), as well as two generated recipes labeled `A' and `B'. One recipe is generated from our baseline encoder-decoder model and one recipe is generated by one of our three personalized models (Prior Tech, Prior Name, Prior Recipe). The order of recipe presentation (A/B) is randomly selected for each question. A screenshot of the user evaluation interface is given in fig:exeval. We ask the user to indicate which recipe they find more coherent, and which recipe best accomplishes the goal indicated by the recipe name. A screenshot of this survey interface is given in fig:exeval2. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What were their results on the new dataset? Only give me the answer and do not output any other words.
Answer:
[ "average recipe-level coherence scores of 1.78-1.82, human evaluators preferred personalized model outputs to baseline 63% of the time" ]
276
3,828
single_doc_qa
3db6f803bafe440499ec3aec102445cf
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Pre-trained models BIBREF0, BIBREF1 have received much attention recently thanks to their impressive results in many downstream NLP tasks. Additionally, multilingual pre-trained models enable many NLP applications for other languages via zero-shot cross-lingual transfer. Zero-shot cross-lingual transfer has shown promising results for rapidly building applications for low-resource languages. BIBREF2 show the potential of multilingual-BERT BIBREF0 in zero-shot transfer for a large number of languages from different language families on five NLP tasks, namely, natural language inference, document classification, named entity recognition, part-of-speech tagging, and dependency parsing. Although multilingual models are an important ingredient for enhancing language technology in many languages, recent research on improving pre-trained models puts much emphasis on English BIBREF3, BIBREF4, BIBREF5. The current state of affairs makes it difficult to translate advancements in pre-training from English to non-English languages. To the best of our knowledge, there are only three available multilingual pre-trained models to date: (1) the multilingual-BERT (mBERT), which supports 104 languages, (2) the cross-lingual language model (XLM) BIBREF6, which supports 100 languages, and (3) Language Agnostic SEntence Representations (LASER) BIBREF7, which supports 93 languages. Among the three models, LASER is based on a neural machine translation approach and strictly requires parallel data to train. Do multilingual models always need to be trained from scratch? Can we transfer linguistic knowledge learned by English pre-trained models to other languages? In this work, we develop a technique to rapidly transfer an existing pre-trained model from English to other languages in an energy-efficient way BIBREF8. As the first step, we focus on building a bilingual language model (LM) of English and a target language. Starting from a pre-trained English LM, we learn the target-language-specific parameters (i.e., word embeddings), while keeping the encoder layers of the pre-trained English LM fixed. We then fine-tune both the English and target models to obtain the bilingual LM. We apply our approach to autoencoding language models with a masked language model objective and show the advantage of the proposed approach in zero-shot transfer. Our main contributions in this work are: We propose a fast adaptation method for obtaining a bilingual BERT$_{\textsc {base}}$ of English and a target language within a day using one Tesla V100 16GB GPU. We evaluate our bilingual LMs for six languages on two zero-shot cross-lingual transfer tasks, namely natural language inference BIBREF9 and universal dependency parsing. We show that our models offer performance competitive with, or even better than, mBERT. We illustrate that our bilingual LMs can serve as an excellent feature extractor in the supervised dependency parsing task. Bilingual Pre-trained LMs We first provide some background on pre-trained language models. Let $\mathbf {E}_e$ be the English word-embeddings and $\Psi (\cdot )$ be the Transformer BIBREF10 encoder with parameters $\theta $. Let $\mathbf {e}_{w_i}$ denote the embedding of word $w_i$ (i.e., $\mathbf {e}_{w_i} = \mathbf {E}_e[w_i]$). We omit positional embeddings and bias for clarity.
A pre-trained LM typically performs the following computations: (i) transform a sequence of input tokens into contextualized representations $[\mathbf {c}_{w_1},\dots ,\mathbf {c}_{w_n}] = \Psi (\mathbf {e}_{w_1}, \dots , \mathbf {e}_{w_n}; \theta )$, and (ii) predict an output word $y_i$ at the $i^{\text{th}}$ position, $p(y_i \,|\, \mathbf {c}_{w_i}) \propto \exp (\mathbf {c}_{w_i}^\top \mathbf {e}_{y_i})$. An autoencoding LM BIBREF0 corrupts some input tokens $w_i$ by replacing them with a special token [MASK]. It then predicts the original tokens $y_i = w_i$ from the corrupted tokens. An autoregressive LM BIBREF3 predicts the next token ($y_i = w_{i+1}$) given all the previous tokens. The recently proposed XLNet model BIBREF5 is an autoregressive LM that factorizes the output with all possible permutations, which shows empirical performance improvement over GPT-2 due to the ability to capture bidirectional context. Here we assume that the encoder performs the necessary masking with respect to each training objective. Given an English pre-trained LM, we wish to learn a bilingual LM for English and a given target language $f$ under a limited computational resource budget. To quickly build a bilingual LM, we directly adapt the English pre-trained model to the target model. Our approach consists of three steps. First, we initialize target language word-embeddings $\mathbf {E}_f$ in the English embedding space such that embeddings of a target word and its English equivalents are close together (§SECREF8). Next, we create a target LM from the target embeddings and the English encoder $\Psi (\cdot )$. We then fine-tune the target embeddings while keeping $\Psi (\cdot )$ fixed (§SECREF14). Finally, we construct a bilingual LM of $\mathbf {E}_e$, $\mathbf {E}_f$, and $\Psi (\cdot )$ and fine-tune all the parameters (§SECREF15). Figure FIGREF7 illustrates the last two steps in our approach. Bilingual Pre-trained LMs ::: Initializing Target Embeddings Our approach to learning the initial foreign word embeddings $\mathbf {E}_f \in \mathbb {R}^{|V_f| \times d}$ is based on the idea of mapping the trained English word embeddings $\mathbf {E}_e \in \mathbb {R}^{|V_e| \times d}$ to $\mathbf {E}_f$ such that if a foreign word and an English word are similar in meaning then their embeddings are similar. Borrowing the idea of universal lexical sharing from BIBREF11, we represent each foreign word embedding $\mathbf {E}_f[i] \in \mathbb {R}^d$ as a linear combination of English word embeddings $\mathbf {E}_e[j] \in \mathbb {R}^d$, $\mathbf {E}_f[i] = \sum _{j=1}^{|V_e|} \alpha _{ij}\, \mathbf {E}_e[j]$, where $\alpha _i \in \mathbb {R}^{|V_e|}$ is a sparse vector and $\sum _j^{|V_e|} \alpha _{ij} = 1$. In this step of initializing foreign embeddings, having a good estimation of $\alpha $ could speed up the convergence when tuning the foreign model and enable zero-shot transfer (§SECREF5). In the following, we discuss how to estimate $\alpha _i\;\forall i\in \lbrace 1,2, \dots , |V_f|\rbrace $ under two scenarios: (i) we have English-foreign parallel data, and (ii) we only rely on English and foreign monolingual data. Bilingual Pre-trained LMs ::: Initializing Target Embeddings ::: Learning from Parallel Corpus Given an English-foreign parallel corpus, we can estimate the word translation probability $p(e\,|\,f)$ for any (English-foreign) pair $(e, f)$ using popular word-alignment BIBREF12 toolkits such as fast-align BIBREF13. We then assign $\alpha _{ij} = p(e_j\,|\,f_i)$. Since $\alpha _i$ is estimated from word alignment, it is a sparse vector. Bilingual Pre-trained LMs ::: Initializing Target Embeddings ::: Learning from Monolingual Corpus For low-resource languages, parallel data may not be available. In this case, we rely only on monolingual data (e.g., Wikipedias). We estimate word translation probabilities from the word embeddings of the two languages.
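A minimal NumPy sketch of the parallel-data initialization just described: word-translation probabilities from an aligner such as fast-align define sparse rows $\alpha _i$, and each foreign embedding is the $\alpha $-weighted combination of English embeddings. The dictionary layout, variable names, and the dense-matrix simplification are our assumptions.

```python
import numpy as np

def init_foreign_embeddings(E_e, translation_probs, foreign_vocab, english_index, d):
    """
    E_e: (|V_e|, d) English embedding matrix.
    translation_probs: {foreign_word: {english_word: p(e | f)}} estimated from word alignments.
    Words with no alignment evidence keep a random N(0, 1/d^2) initialization.
    """
    rng = np.random.default_rng(0)
    E_f = rng.normal(0.0, 1.0 / d, size=(len(foreign_vocab), d))  # std 1/d, i.e. variance 1/d^2
    for i, f_word in enumerate(foreign_vocab):
        alpha = np.zeros(E_e.shape[0])
        for e_word, p in translation_probs.get(f_word, {}).items():
            j = english_index.get(e_word)
            if j is not None:
                alpha[j] = p
        if alpha.sum() > 0:
            alpha /= alpha.sum()          # sparse row that sums to 1
            E_f[i] = alpha @ E_e          # linear combination of English embeddings
    return E_f
```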
Word vectors of these languages can be learned using fastText BIBREF14 and then aligned into a shared space with English BIBREF15, BIBREF16. Unlike learning contextualized representations, learning word vectors is fast and computationally cheap. Given the aligned vectors $\bar{\mathbf {E}}_f$ of the foreign language and $\bar{\mathbf {E}}_e$ of English, we calculate the word translation matrix in $\mathbb {R}^{|V_f|\times |V_e|}$ by applying $\mathrm {sparsemax}$ BIBREF17, instead of softmax, to the similarity scores between the foreign and English vectors. Sparsemax is a sparse version of softmax: it puts zero probability on most of the words in the English vocabulary except for the few English words that are similar to a given foreign word. This property is desirable in our approach since it leads to a better initialization of the foreign embeddings. Bilingual Pre-trained LMs ::: Fine-tuning Target Embeddings After initializing the foreign word-embeddings, we replace the English word-embeddings in the English pre-trained LM with the foreign word-embeddings to obtain the foreign LM. We then fine-tune only the foreign word-embeddings on monolingual data. The training objective is the same as the training objective of the English pre-trained LM (i.e., masked LM for BERT). Since the trained encoder $\Psi (\cdot )$ is good at capturing association, the purpose of this step is to further optimize the target embeddings such that the target LM can utilize the trained encoder for the association task. For example, if the words Albert Camus are present in a French input sequence, the self-attention in the encoder is more likely to attend to the words absurde and existentialisme once their embeddings are tuned. Bilingual Pre-trained LMs ::: Fine-tuning Bilingual LM We create a bilingual LM by plugging the foreign-language-specific parameters into the pre-trained English LM (Figure FIGREF7). The new model has two separate embedding layers and output layers, one for English and one for the foreign language. The encoder layer in between is shared. We then fine-tune this model using English and foreign monolingual data. Here, we keep tuning the model on English to ensure that it does not forget what it has learned in English and that we can use the resulting model for zero-shot transfer (§SECREF3). In this step, the encoder parameters are also updated so that it can learn syntactic aspects (i.e., word order, morphological agreement) of the target languages. Zero-shot Experiments We build our bilingual LMs, named RAMEN, starting from the BERT$_{\textsc {base}}$, BERT$_{\textsc {large}}$, RoBERTa$_{\textsc {base}}$, and RoBERTa$_{\textsc {large}}$ pre-trained models. Using BERT$_{\textsc {base}}$ allows us to compare the results with the mBERT model. Using BERT$_{\textsc {large}}$ and RoBERTa allows us to investigate whether the performance of the target LM correlates with the performance of the source LM. We evaluate our models on two cross-lingual zero-shot tasks: (1) Cross-lingual Natural Language Inference (XNLI) and (2) dependency parsing. Zero-shot Experiments ::: Data We evaluate our approach for six target languages: French (fr), Russian (ru), Arabic (ar), Chinese (zh), Hindi (hi), and Vietnamese (vi). These languages belong to four different language families. French, Russian, and Hindi are Indo-European languages, similar to English. Arabic, Chinese, and Vietnamese belong to the Afro-Asiatic, Sino-Tibetan, and Austro-Asiatic families, respectively. The choice of the six languages also reflects different training conditions depending on the amount of monolingual data.
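Sparsemax itself BIBREF17 is a simple Euclidean projection onto the probability simplex and can be sketched in a few lines; the dot-product similarity between the aligned vectors below is our assumption about how the scores are formed, and the code is a generic illustration rather than the authors' implementation.

```python
import numpy as np

def sparsemax(z: np.ndarray) -> np.ndarray:
    """Sparsemax of a 1-D score vector: projection onto the simplex, yielding exact zeros."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    ks = np.arange(1, z.size + 1)
    k = ks[1 + ks * z_sorted > cumsum][-1]     # size of the support
    tau = (cumsum[k - 1] - 1.0) / k            # threshold
    return np.maximum(z - tau, 0.0)

def translation_weights(E_f_aligned: np.ndarray, E_e_aligned: np.ndarray) -> np.ndarray:
    """Row i: sparse weights of foreign word i over the English vocabulary (assumed dot-product scores)."""
    scores = E_f_aligned @ E_e_aligned.T       # (|V_f|, |V_e|)
    return np.apply_along_axis(sparsemax, 1, scores)
```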
French, Russian, and Arabic can be regarded as high-resource languages, whereas Hindi has far less data and can be considered low-resource. For experiments that use parallel data to initialize the foreign-specific parameters, we use the same datasets as in the work of BIBREF6. Specifically, we use the United Nations Parallel Corpus BIBREF18 for en-ru, en-ar, en-zh, and en-fr. We collect en-hi parallel data from the IIT Bombay corpus BIBREF19 and en-vi data from OpenSubtitles 2018. For experiments that use only monolingual data to initialize foreign parameters, instead of training word vectors from scratch, we use the pre-trained word vectors from fastText BIBREF14 to estimate word translation probabilities (Eq. DISPLAY_FORM13). We align these vectors into a common space using orthogonal Procrustes BIBREF20, BIBREF15, BIBREF16. We only use identical words between the two languages as the supervised signal. We use WikiExtractor to extract raw sentences from Wikipedias as monolingual data for fine-tuning target embeddings and bilingual LMs (§SECREF15). We do not lowercase or remove accents in our data preprocessing pipeline. We tokenize English using the provided tokenizer from the pre-trained models. For target languages, we use fastBPE to learn 30,000 BPE codes and 50,000 codes when transferring from BERT and RoBERTa, respectively. We truncate the BPE vocabulary of foreign languages to match the size of the English vocabulary in the source models. Precisely, the size of the foreign vocabulary is set to 32,000 when transferring from BERT and 50,000 when transferring from RoBERTa. We use the XNLI dataset BIBREF9 for the classification task and Universal Dependencies v2.4 BIBREF21 for the parsing task. Since a language might have more than one treebank in Universal Dependencies, we use the following treebanks: en_ewt (English), fr_gsd (French), ru_syntagrus (Russian), ar_padt (Arabic), vi_vtb (Vietnamese), hi_hdtb (Hindi), and zh_gsd (Chinese). Zero-shot Experiments ::: Data ::: Remark on BPE BIBREF22 show that sharing subwords between languages improves alignments between embedding spaces. BIBREF2 observe a strong correlation between the percentage of overlapping subwords and mBERT's performance for cross-lingual zero-shot transfer. However, in our current approach, subwords between source and target are not shared. A subword that is in both the English and foreign vocabularies has two different embeddings. Zero-shot Experiments ::: Estimating translation probabilities Since pre-trained models operate at the subword level, we need to estimate subword translation probabilities. Therefore, we subsample 2M sentence pairs from each parallel corpus and tokenize the data into subwords before running fast-align BIBREF13. Estimating subword translation probabilities from aligned word vectors requires an additional processing step since the provided vectors from fastText are not at the subword level. We use the following approximation to obtain subword vectors: the vector $\mathbf {e}_s$ of subword $s$ is the weighted average of all the aligned word vectors $\bar{\mathbf {e}}_{w_j}$ that have $s$ as a subword, $\mathbf {e}_s = \frac{1}{n_s} \sum _{w_j:\, s\in w_j} p(w_j)\, \bar{\mathbf {e}}_{w_j}$, where $p(w_j)$ is the unigram probability of word $w_j$ and $n_s = \sum _{w_j:\, s\in w_j} p(w_j)$. We take the top 50,000 words in each set of aligned word vectors to compute subword vectors. In both cases, not all the words in the foreign vocabulary can be initialized from the English word-embeddings. Those words are initialized randomly from a Gaussian $\mathcal {N}(0, \frac{1}{d^2})$.
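A small sketch of the subword-vector approximation above; the BPE segmentation and dictionary layout are assumptions, and the normalizer follows the $n_s$ definition in the text.

```python
import numpy as np
from collections import defaultdict

def subword_vectors(word_vecs, unigram_prob, word_subwords, d):
    """
    word_vecs: {word: aligned fastText vector}; unigram_prob: {word: p(w)};
    word_subwords: {word: list of BPE subwords}, e.g. precomputed with fastBPE.
    Returns e_s = (1 / n_s) * sum_{w: s in w} p(w) * e_w, with n_s = sum_{w: s in w} p(w).
    """
    weighted_sum = defaultdict(lambda: np.zeros(d))
    n_s = defaultdict(float)
    for w, vec in word_vecs.items():
        p = unigram_prob.get(w, 0.0)
        for s in set(word_subwords.get(w, [])):
            weighted_sum[s] += p * vec
            n_s[s] += p
    return {s: weighted_sum[s] / n_s[s] for s in weighted_sum if n_s[s] > 0}
```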
Zero-shot Experiments ::: Hyper-parameters In all the experiments, we tune RAMEN$_{\textsc {base}}$ for 175,000 updates and RAMEN$_{\textsc {large}}$ for 275,000 updates, where the first 25,000 updates are for the language-specific parameters. The sequence length is set to 256. The mini-batch sizes are 64 and 24 when tuning the language-specific parameters using RAMEN$_{\textsc {base}}$ and RAMEN$_{\textsc {large}}$, respectively. For tuning the bilingual LMs, we use a mini-batch size of 64 for RAMEN$_{\textsc {base}}$ and 24 for RAMEN$_{\textsc {large}}$, where half of the batch consists of English sequences and the other half of foreign sequences. This strategy of balancing mini-batches has been used in multilingual neural machine translation BIBREF23, BIBREF24. We optimize RAMEN$_{\textsc {base}}$ using the Lookahead optimizer BIBREF25 wrapped around Adam with a learning rate of $10^{-4}$, a number of fast weight updates $k=5$, and an interpolation parameter $\alpha =0.5$. We choose the Lookahead optimizer because it has been shown to be robust to the initial parameters of the base optimizer (Adam). For the Adam optimizer, we linearly increase the learning rate from $10^{-7}$ to $10^{-4}$ in the first 4000 updates and then follow an inverse square root decay. All RAMEN$_{\textsc {large}}$ models are optimized with Adam due to memory limits. When fine-tuning RAMEN on XNLI and UD, we use a mini-batch size of 32 and an Adam learning rate of $10^{-5}$. The number of epochs is set to 4 and 50 for the XNLI and UD tasks, respectively. All experiments are carried out on a single Tesla V100 16GB GPU. Each RAMEN$_{\textsc {base}}$ model is trained within a day and each RAMEN$_{\textsc {large}}$ within two days. Results In this section, we present the results of our models for two zero-shot cross-lingual transfer tasks: XNLI and dependency parsing. Results ::: Cross-lingual Natural Language Inference Table TABREF32 shows the XNLI test accuracy. For reference, we also include the scores from previous work, notably the state-of-the-art system XLM BIBREF6. Before discussing the results, we spell out that the fairest comparison in this experiment is the comparison between mBERT and RAMEN$_{\textsc {base}}$+BERT trained with monolingual data only. We first discuss the transfer results from BERT. Initialized from fastText vectors, RAMEN$_{\textsc {base}}$ slightly outperforms mBERT by 1.9 points on average and widens the gap to 3.3 points on Arabic. RAMEN$_{\textsc {base}}$ gains an extra 0.8 points on average when initialized from parallel data. With triple the number of parameters, RAMEN$_{\textsc {large}}$ offers an additional boost in terms of accuracy, and initialization with parallel data consistently improves the performance. It has been shown that BERT$_{\textsc {large}}$ significantly outperforms BERT$_{\textsc {base}}$ on 11 English NLP tasks BIBREF0; the strength of BERT$_{\textsc {large}}$ also shows up when adapted to foreign languages. Transferring from RoBERTa leads to better zero-shot accuracies. With the same initialization condition, RAMEN$_{\textsc {base}}$+RoBERTa outperforms RAMEN$_{\textsc {base}}$+BERT on average by 2.9 and 2.3 points when initializing from monolingual and parallel data, respectively. This result shows that, with a similar number of parameters, our approach benefits from a better English pre-trained model. When transferring from RoBERTa$_{\textsc {large}}$, we obtain state-of-the-art results for five languages. Currently, RAMEN only uses parallel data to initialize foreign embeddings.
RAMEN can also exploit parallel data through the translation objective proposed in XLM. We believe that utilizing parallel data during the fine-tuning of RAMEN would bring additional benefits for zero-shot tasks. We leave this exploration to future work. In summary, starting from BERT$_{\textsc {base}}$, our approach obtains bilingual LMs that are competitive with mBERT for zero-shot XNLI. Our approach shows accuracy gains when adapting from a better pre-trained model. Results ::: Universal Dependency Parsing We build a graph-based dependency parser BIBREF27 on top of RAMEN. For the purpose of evaluating the contextual representations learned by our model, we do not use part-of-speech tags. Contextualized representations are directly fed into Deep-Biaffine layers to predict arc and label scores. Table TABREF34 presents the Labeled Attachment Scores (LAS) for zero-shot dependency parsing. We first look at the fairest comparison, between mBERT and the monolingually initialized RAMEN$_{\textsc {base}}$+BERT. The latter outperforms the former on five languages, the exception being Arabic. We observe the largest gain of +5.2 LAS for French. Chinese enjoys +3.1 LAS from our approach. With a similar architecture (12 or 24 layers) and initialization (using monolingual or parallel data), RAMEN+RoBERTa performs better than RAMEN+BERT for most of the languages. Arabic and Hindi benefit the most from bigger models. For the other four languages, RAMEN$_{\textsc {large}}$ renders a modest improvement over RAMEN$_{\textsc {base}}$. Analysis ::: Impact of initialization Initializing foreign embeddings is the backbone of our approach. A good initialization leads to better zero-shot transfer results and enables fast adaptation. To verify the importance of a good initialization, we train a RAMEN$_{\textsc {base}}$+RoBERTa whose foreign word-embeddings are initialized randomly from $\mathcal {N}(0, \frac{1}{d^2})$. For a fair comparison, we use the same hyper-parameters as in §SECREF27. Table TABREF36 shows the XNLI and UD parsing results under random initialization. In comparison to the initialization using aligned fastText vectors, random initialization decreases the zero-shot performance of RAMEN$_{\textsc {base}}$ by 15.9% for XNLI and 27.8 points for UD parsing on average. We also see that zero-shot parsing of SOV languages (Arabic and Hindi) suffers from random initialization. Analysis ::: Are contextual representations from RAMEN also good for supervised parsing? All the RAMEN models are built from English and tuned on English for zero-shot cross-lingual tasks. It is reasonable to expect the RAMEN models to do well on those tasks, as we have shown in our experiments. But are they also good feature extractors for supervised tasks? We offer a partial answer to this question by evaluating our model for supervised dependency parsing on UD datasets. We used the train/dev/test splits provided in UD to train and evaluate our RAMEN-based parser. Table TABREF38 summarizes the results (LAS) of our supervised parser. For a fair comparison, we choose mBERT as the baseline, and all the RAMEN models are initialized from aligned fastText vectors. With the same architecture of 12 Transformer layers, RAMEN$_{\textsc {base}}$+BERT performs competitively with mBERT and outshines mBERT by +1.2 points for Vietnamese. The best LAS results are obtained by RAMEN$_{\textsc {large}}$+RoBERTa with 24 Transformer layers. Overall, our results indicate the potential of using contextual representations from RAMEN for supervised tasks.
Analysis ::: How does linguistic knowledge transfer happen through each training stage? We evaluate the performance of RAMEN+RoBERTa$_{\textsc {base}}$ (initialized from monolingual data) at each training stage: initialization of word embeddings (0K updates), fine-tuning of target embeddings (25K), and fine-tuning the model on both English and the target language (at each subsequent 25K updates). The results are presented in Figure FIGREF40. Without fine-tuning, the average accuracy on XNLI is 39.7% for a three-way classification task, and the average LAS score is 3.6 for dependency parsing. We see the biggest leap in performance after 50K updates. While the semantic similarity task profits significantly from the 25K updates of the target embeddings, the syntactic task benefits from further fine-tuning of the encoder. This is expected since the target languages might exhibit different syntactic structures than English, and fine-tuning the encoder helps capture language-specific structures. We observe a substantial gain of 19-30 LAS for all languages except French after 50K updates. Language similarities have more impact on transferring syntax than semantics. Without tuning the English encoder, French enjoys 50.3 LAS, being closely related to English, whereas Arabic and Hindi, SOV languages, modestly reach 4.2 and 6.4 points using the SVO encoder. Although Chinese has SVO order, it is often seen as head-final while English is strongly head-initial. Perhaps this explains the poor performance for Chinese. Limitations While we have successfully adapted autoencoding pre-trained LMs from English to other languages, the question of whether our approach can also be applied to autoregressive LMs such as XLNet still remains. We leave this investigation to future work. Conclusions In this work, we have presented a simple and effective approach for rapidly building a bilingual LM under a limited computational budget. Using BERT as the starting point, we demonstrate that our approach performs better than mBERT on two cross-lingual zero-shot tasks: sentence classification and dependency parsing. We find that the performance of our bilingual LM, RAMEN, correlates with the performance of the original pre-trained English models. We also find that RAMEN is a powerful feature extractor for supervised dependency parsing. Finally, we hope that our work sparks interest in developing fast and effective methods for transferring pre-trained English models to other languages. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What metrics are used for evaluation? Only give me the answer and do not output any other words.
Answer:
[ "translation probabilities, Labeled Attachment Scores (LAS)", "accuracy, Labeled Attachment Scores (LAS)" ]
276
5,176
single_doc_qa
9264340d3cc7439ea76477eac39492a1
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction and Background Many reinforcement learning algorithms are designed for relatively small discrete or continuous action spaces and so have trouble scaling. Text-adventure games, or interactive fictions, are simulations in which both an agent's state and action spaces are in textual natural language. An example of a one-turn agent interaction in the popular text-game Zork1 can be seen in Fig. FIGREF1. Text-adventure games provide us with multiple challenges in the form of partial observability, commonsense reasoning, and a combinatorially-sized state-action space. Text-adventure games are structured as long puzzles or quests, interspersed with bottlenecks. The quests can usually be completed through multiple branching paths. However, games can also feature one or more bottlenecks. Bottlenecks are areas that an agent must pass through in order to progress to the next section of the game regardless of what path the agent has taken to complete that section of the quest BIBREF0. In this work, we focus on more effectively exploring this space and surpassing these bottlenecks, building on prior work that focuses on tackling the other problems. Formally, we use the definition of text-adventure games as seen in BIBREF1 and BIBREF2. These games are partially observable Markov decision processes (POMDPs), represented as a 7-tuple $\langle S,T,A,\Omega , O,R, \gamma \rangle $ representing the set of environment states, the mostly deterministic conditional transition probabilities between states, the vocabulary or words used to compose text commands, the observations returned by the game, the observation conditional probabilities, the reward function, and the discount factor, respectively. For our purposes, understanding the exact state and action spaces we use in this work is critical, and so we define each of these in relative depth. Action-Space. Solving Zork1, the canonical text-adventure game, requires the generation of actions consisting of up to five words from a relatively modest vocabulary of 697 words recognized by the game's parser. This results in $\mathcal {O}(697^5) \approx 1.64\times 10^{14}$ possible actions at every step. To facilitate text-adventure game playing, BIBREF2 introduce Jericho, a framework for interacting with text-games. They propose a template-based action space in which the agent first selects a template, consisting of an action verb and preposition, and then fills that in with relevant entities (e.g. $[get]$ ___ $[from]$ ___). Zork1 has 237 templates, each with up to two blanks, yielding a template-action space of size $\mathcal {O}(237 \times 697^2) \approx 1.15\times 10^{8}$. This space is still far larger than most used by previous approaches applying reinforcement learning to text-based games. State-Representation. Prior work has shown that knowledge graphs are effective in terms of dealing with the challenges of partial observability (BIBREF3; BIBREF4). A knowledge graph is a set of 3-tuples of the form $\langle subject, relation, object \rangle $. These triples are extracted from the observations using Stanford's Open Information Extraction (OpenIE) BIBREF5. Human-made text-adventure games often contain relatively complex semi-structured information that OpenIE is not designed to parse, and so these works add additional rules to ensure that the correct information is parsed.
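To make the template action space concrete, the sketch below enumerates actions from templates with blanks; the "OBJ" placeholder syntax and function names are illustrative rather than Jericho's actual API, and the counts in the comments follow the figures quoted above.

```python
from itertools import product

def template_actions(templates, fillers):
    """
    templates: strings with 'OBJ' placeholders, e.g. "take OBJ" or "put OBJ in OBJ".
    fillers: candidate entities for the blanks (the full 697-word vocabulary in the worst
    case; KG-A2C instead restricts this set to entities present in its knowledge graph).
    """
    actions = []
    for t in templates:
        n_blanks = t.count("OBJ")
        if n_blanks == 0:
            actions.append(t)
            continue
        for combo in product(fillers, repeat=n_blanks):
            action = t
            for entity in combo:
                action = action.replace("OBJ", entity, 1)
            actions.append(action)
    return actions

# 237 templates with up to two blanks over 697 fillers gives roughly 1.15e8 actions,
# which is why constraining the fillers matters so much.
```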
The graph itself is more or less a map of the world, with information about objects' affordances and attributes linked to the rooms in which they are placed. The graph also makes a distinction between items that are in the agent's possession and those in its immediate surrounding environment. An example of what the knowledge graph looks like and specific implementation details can be found in Appendix SECREF14. BIBREF6 introduce the KG-A2C, which uses a knowledge-graph-based state-representation to aid in the selection of actions in a combinatorially-sized action-space; specifically, they use the knowledge graph to constrain the kinds of entities that can fill in the blanks in the template action-space. They test their approach on Zork1, showing that the combination of the knowledge graph and template action selection resulted in improvements over existing methods. They note that their approach reaches a score of 40, which corresponds to a bottleneck in Zork1 where the player is eaten by a “grue” (resulting in negative reward) if the player has not first lit a lamp. The lamp must be lit many steps after first being encountered, in a different section of the game; this action is necessary to continue exploring but doesn't immediately produce any positive reward. That is, there is a long-term dependency between actions that is not immediately rewarded, as seen in Figure FIGREF1. Others using artificially constrained action spaces also report an inability to pass through this bottleneck BIBREF7, BIBREF8. Such bottlenecks pose a significant challenge for these methods because the agent does not see the correct action sequence to pass the bottleneck enough times. This is in part due to the fact that for that sequence to be reinforced, the agent needs to reach the next possible reward beyond the bottleneck. More efficient exploration strategies are required to pass bottlenecks. Our contributions are two-fold. We first introduce a method that detects bottlenecks in text-games using the overall reward gained and the knowledge graph state. This method freezes the policy used to reach the bottleneck and restarts training from there on out, additionally conducting a backtracking search to ensure that a sub-optimal policy has not been frozen. The second contribution explores how to leverage knowledge graphs to improve existing exploration algorithms for dealing with combinatorial action-spaces, such as Go-Explore BIBREF9. We additionally present a comparative ablation study analyzing the performance of these methods on the popular text-game Zork1. Exploration Methods In this section, we describe methods to explore combinatorially sized action spaces such as text-games, focusing especially on methods that can deal with their inherent bottleneck structure. We first describe our method that explicitly attempts to detect bottlenecks and then describe how an exploration algorithm such as Go-Explore BIBREF9 can leverage knowledge graphs. KG-A2C-chained An example of a bottleneck can be seen in Figure FIGREF1. We extend the KG-A2C algorithm as follows. First, we detect bottlenecks as states where the agent is unable to progress any further. We set a patience parameter, and if the agent has not seen a higher score in patience steps, the agent assumes it has been limited by a bottleneck. Second, when a bottleneck is found, we freeze the policy that gets the agent to the state with the highest score. The agent then begins training a new policy from that particular state.
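A sketch of the patience-based detection and policy chaining just described; the backtracking refinement discussed next is omitted here, and the agent/environment interfaces are placeholders rather than the authors' actual classes.

```python
import copy

def chained_training(agent, env, total_steps: int, patience: int):
    """Freeze the current policy when no score improvement is seen for `patience` steps,
    then begin training a fresh policy from the highest-scoring state reached so far."""
    frozen_policies = []                                   # chain of policies up to each bottleneck
    state = env.reset()
    best_score, best_state, since_improvement = float("-inf"), state, 0
    for _ in range(total_steps):
        state, score = agent.train_step(env, state)        # one A2C update plus environment step(s)
        if score > best_score:
            best_score, best_state, since_improvement = score, state, 0
        else:
            since_improvement += 1
        if since_improvement >= patience:                   # assume we have hit a bottleneck
            frozen_policies.append(copy.deepcopy(agent.policy))
            agent.reset_policy()                            # start a new policy...
            state, since_improvement = best_state, 0        # ...from the best state so far
    return frozen_policies, agent.policy
```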
Simply freezing the policy that led to the bottleneck, however, can potentially result in a policy that is globally sub-optimal. We therefore employ a backtracking strategy that restarts exploration from each of the $n$ previous steps, searching for a more optimal policy that reaches that bottleneck. At each step, we keep track of a buffer of $n$ states and admissible actions that led up to that locally optimal state. We force the agent to explore from this state to attempt to drive it out of the local optimum. If it is further unable to escape this local optimum, we refresh the training process again, but starting at the state immediately before the agent reaches the local optimum. If this continues to fail, we continue to iterate through this buffer of seen states up to that local optimum until we either find a more optimal state or we run out of states to refresh from, in which case we terminate the training algorithm. KG-A2C-Explore Go-Explore BIBREF9 is an algorithm that is designed to keep track of sub-optimal and under-explored states in order to allow the agent to explore upon more optimal states that may be a result of sparse rewards. The Go-Explore algorithm consists of two phases, the first to continuously explore until a set of promising states and corresponding trajectories are found on the basis of total score, and the second to robustify this found policy against potential stochasticity in the game. Promising states are defined as those states that, when explored from, will likely result in higher-reward trajectories. Since the text games we are dealing with are mostly deterministic, with the exception of Zork in later stages, we focus only on using Phase 1 of the Go-Explore algorithm to find an optimal policy. BIBREF10 look at applying Go-Explore to text-games on a set of simpler games generated using the game generation framework TextWorld BIBREF1. Instead of training a policy network in parallel to generate actions used for exploration, they use a small set of “admissible actions”, actions guaranteed to change the world state at any given step during Phase 1, to explore and find high-reward trajectories. This space of actions is relatively small (of the order of $10^2$ per step), and so finding high-reward trajectories in larger action-spaces such as in Zork would be infeasible. Go-Explore maintains an archive of cells, defined as a set of states that map to a single representation, to keep track of promising states. BIBREF9 simply encode each cell by keeping track of the agent's position, and BIBREF10 use the textual observations encoded by a recurrent neural network as a cell representation. We improve on this implementation by training the KG-A2C network in parallel, using the snapshot of the knowledge graph in conjunction with the game state to further encode the current state, and use this as a cell representation. At each step, Go-Explore chooses a cell to explore at random (weighted by score to prefer more advanced cells). The KG-A2C will run for a number of steps, starting with the knowledge graph state and the last seen state of the game from the cell. This will generate a trajectory for the agent while further training the KG-A2C at each iteration, creating a new representation for the knowledge graph as well as a new game state for the cell. After expanding a cell, Go-Explore will continue to sample cells by weight to continue expanding its known states.
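A minimal sketch of the Phase-1 loop just described, with cells keyed by a knowledge-graph-plus-game-state representation and sampled with score-weighted probabilities; all object interfaces here are placeholders rather than the authors' implementation.

```python
import random

def go_explore_phase1(kg_a2c, env, iterations: int, cell_steps: int):
    """Maintain an archive of cells; sample higher-scoring cells more often and keep
    training KG-A2C while expanding each selected cell."""
    start = env.reset()
    archive = {kg_a2c.cell_key(start): {"state": start, "score": 0.0}}
    for _ in range(iterations):
        cells = list(archive.values())
        weights = [max(c["score"], 1e-3) for c in cells]             # prefer more advanced cells
        cell = random.choices(cells, weights=weights, k=1)[0]
        trajectory = kg_a2c.rollout(env, cell["state"], cell_steps)   # also trains KG-A2C
        for state, score in trajectory:
            key = kg_a2c.cell_key(state)                              # KG snapshot + game state
            if key not in archive or score > archive[key]["score"]:
                archive[key] = {"state": state, "score": score}
    return archive
```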
At the same time, KG-A2C will benefit from the heuristics of selecting preferred cells and be trained on promising states more often. Evaluation We compare our two exploration strategies to the following baselines and ablations: KG-A2C This is the exact same method presented in BIBREF6 with no modifications. A2C Represents the same approach as KG-A2C but with all the knowledge graph components removed. The state representation is text only, encoded using recurrent networks. A2C-chained Is a variation on KG-A2C-chained where we use our policy chaining approach with the A2C method to train the agent instead of KG-A2C. A2C-Explore Uses A2C in addition to the exploration strategy seen in KG-A2C-Explore. The cell representations here are defined in terms of the recurrent network based encoding of the textual observation. Figure FIGREF10 shows that agents utilizing knowledge graphs in addition to either enhanced exploration method far outperform the baseline A2C and KG-A2C. KG-A2C-chained and KG-A2C-Explore both pass the bottleneck of a score of 40, whereas A2C-Explore gets to the bottleneck but cannot surpass it. There are a couple of key insights that can be drawn from these results. The first is that the knowledge graph appears to be critical; it is theorized to help with partial observability. However, the knowledge graph representation alone is not sufficient: without enhanced exploration methods it cannot surpass the bottleneck. A2C-chained, which explores without a knowledge graph, fails to even outperform the baseline A2C. We hypothesize that this is due to the knowledge graph aiding implicitly in the sample efficiency of bottleneck detection and subsequent exploration. That is, exploring after backtracking from a potentially detected bottleneck is much more efficient in the knowledge-graph-based agent. The Go-Explore-based exploration algorithm sees less of a difference between agents. A2C-Explore converges more quickly, but to a lower-reward trajectory that fails to pass the bottleneck, whereas KG-A2C-Explore takes longer to reach a similar reward but consistently makes it through the bottleneck. The knowledge graph cell representation thus appears to be a better indication of what a promising state is than the textual observation alone. Comparing the advanced exploration methods when using the knowledge graph, we see that both agents successfully pass the bottleneck corresponding to entering the cellar and lighting the lamp, and reach comparable scores within a margin of error. KG-A2C-chained is significantly more sample efficient and converges faster. We can infer that chaining policies by explicitly detecting bottlenecks lets us pass the bottleneck more quickly than attempting to find promising cell representations with Go-Explore. This form of chained exploration with backtracking is particularly suited to sequential decision-making problems that can be represented as directed acyclic graphs, as in Figure FIGREF1. Appendix ::: Zork1 Zork1 is one of the first text-adventure games and heavily influenced games released later in terms of narrative style and game structure. It is a dungeon crawler where the player must explore a vast world and collect a series of treasures. It was identified by BIBREF2 as a moonshot game and has been the subject of much work on learning agents BIBREF12, BIBREF7, BIBREF11, BIBREF8.
Rewards are given to the player when they collect treasures as well as when important intermediate milestones needed to further explore the world are passed. Figure FIGREF15 and Figure FIGREF1 show us a map of the world of Zork1 and the corresponding quest structure. The bottleneck seen at a score of around 40 is when the player first enters the cellar on the right side of the map. The cellar is dark and you need to immediately light the lamp to see anything. Attempting to explore the cellar in the dark results in you being instantly killed by a monster known as a “grue”. Appendix ::: Knowledge Graph Rules We make no changes from the graph update rules used by BIBREF6. Candidate interactive objects are identified by performing part-of-speech tagging on the current observation, identifying singular and proper nouns as well as adjectives, and are then filtered by checking if they can be examined using the command $examine$ $OBJ$. Only the interactive objects not found in the inventory are linked to the node corresponding to the current room and the inventory items are linked to the “you” node. The only other rule applied uses the navigational actions performed by the agent to infer the relative positions of rooms, e.g. $\langle kitchen,down,cellar \rangle $ when the agent performs $go$ $down$ when in the kitchen to move to the cellar. Appendix ::: Hyperparameters Hyperparameters used for our agents are given below. Patience and buffer size are used for the policy chaining method as described in Section SECREF2. Cell step size is a parameter used for Go-Explore and describes how many steps are taken when exploring in a given cell state. Base hyperparameters for KG-A2C are taken from BIBREF6 and the same parameters are used for A2C. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What are the results from these proposed strategies? Only give me the answer and do not output any other words.
Answer:
[ "Reward of 11.8 for the A2C-chained model, 41.8 for the KG-A2C-chained model, 40 for A2C-Explore and 44 for KG-A2C-Explore.", "KG-A2C-chained and KG-A2C-Explore both pass the bottleneck of a score of 40" ]
276
3,141
single_doc_qa
1c098e6d54304070849a215c14785801
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Assembling training corpora of annotated natural language examples in specialized domains such as biomedicine poses considerable challenges. Experts with the requisite domain knowledge to perform high-quality annotation tend to be expensive, while lay annotators may not have the necessary knowledge to provide high-quality annotations. A practical approach for collecting a sufficiently large corpus would be to use crowdsourcing platforms like Amazon Mechanical Turk (MTurk). However, crowd workers in general are likely to provide noisy annotations BIBREF0 , BIBREF1 , BIBREF2 , an issue exacerbated by the technical nature of specialized content. Some of this noise may reflect worker quality and can be modeled BIBREF0 , BIBREF1 , BIBREF3 , BIBREF4 , but for some instances lay people may simply lack the domain knowledge to provide useful annotation. In this paper we report experiments on the EBM-NLP corpus comprising crowdsourced annotations of medical literature BIBREF5 . We operationalize the concept of annotation difficulty and show how it can be exploited during training to improve information extraction models. We then obtain expert annotations for the abstracts predicted to be most difficult, as well as for a similar number of randomly selected abstracts. The annotation of highly specialized data and the use of lay and expert annotators allow us to examine the following key questions related to lay and expert annotations in specialized domains: Can we predict item difficulty? We define a training instance as difficult if a lay annotator or an automated model disagree on its labeling. We show that difficulty can be predicted, and that it is distinct from inter-annotator agreement. Further, such predictions can be used during training to improve information extraction models. Are there systematic differences between expert and lay annotations? We observe decidedly lower agreement between lay workers as compared to domain experts. Lay annotations have high precision but low recall with respect to expert annotations in the new data that we collected. More generally, we expect lay annotations to be lower quality, which may translate to lower precision, recall, or both, compared to expert annotations. Can one rely solely on lay annotations? Reasonable models can be trained using lay annotations alone, but similar performance can be achieved using markedly less expert data. This suggests that the optimal ratio of expert to crowd annotations for specialized tasks will depend on the cost and availability of domain experts. Expert annotations are preferable whenever its collection is practical. But in real-world settings, a combination of expert and lay annotations is better than using lay data alone. Does it matter what data is annotated by experts? We demonstrate that a system trained on combined data achieves better predictive performance when experts annotate difficult examples rather than instances selected at i.i.d. random. Our contributions in this work are summarized as follows. We define a task difficulty prediction task and show how this is related to, but distinct from, inter-worker agreement. 
We introduce a new model for difficulty prediction combining learned representations induced via a pre-trained `universal' sentence encoder BIBREF6 , and a sentence encoder learned from scratch for this task. We show that predicting annotation difficulty can be used to improve the task routing and model performance for a biomedical information extraction task. Our results open up a new direction for ensuring corpus quality. We believe that item difficulty prediction will likely be useful in other, non-specialized tasks as well, and that the most effective data collection in specialized domains requires research addressing the fundamental questions we examine here. Related Work Crowdsourcing annotation is now a well-studied problem BIBREF7 , BIBREF0 , BIBREF1 , BIBREF2 . Due to the noise inherent in such annotations, there have also been considerable efforts to develop aggregation models that minimize noise BIBREF0 , BIBREF1 , BIBREF3 , BIBREF4 . There are also several surveys of crowdsourcing in biomedicine specifically BIBREF8 , BIBREF9 , BIBREF10 . Some work in this space has contrasted model performance achieved using expert vs. crowd annotated training data BIBREF11 , BIBREF12 , BIBREF13 . Dumitrache et al. Dumitrache:2018:CGT:3232718.3152889 concluded that performance is similar under these supervision types, finding no clear advantage from using expert annotators. This differs from our findings, perhaps owing to differences in design. The experts we used already hold advanced medical degrees, for instance, while those in prior work were medical students. Furthermore, the task considered here would appear to be of greater difficulty: even a system trained on $\sim $ 5k instances performs reasonably, but far from perfect. By contrast, in some of the prior work where experts and crowd annotations were deemed equivalent, a classifier trained on 300 examples can achieve very high accuracy BIBREF12 . More relevant to this paper, prior work has investigated methods for `task routing' in active learning scenarios in which supervision is provided by heterogeneous labelers with varying levels of expertise BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF14 . The related question of whether effort is better spent collecting additional annotations for already labeled (but potentially noisily so) examples or novel instances has also been addressed BIBREF18 . What distinguishes the work here is our focus on providing an operational definition of instance difficulty, showing that this can be predicted, and then using this to inform task routing. Application Domain Our specific application concerns annotating abstracts of articles that describe the conduct and results of randomized controlled trials (RCTs). Experimentation in this domain has become easy with the recent release of the EBM-NLP BIBREF5 corpus, which includes a reasonably large training dataset annotated via crowdsourcing, and a modest test set labeled by individuals with advanced medical training. More specifically, the training set comprises 4,741 medical article abstracts with crowdsourced annotations indicating snippets (sequences) that describe the Participants (p), Interventions (i), and Outcome (o) elements of the respective RCT, and the test set is composed of 191 abstracts with p, i, o sequence annotations from three medical experts. Table 1 shows an example of difficult and easy examples according to our definition of difficulty. The underlined text demarcates the (consensus) reference label provided by domain experts. 
In the difficult examples, crowd workers marked text distinct from these reference annotations; whereas in the easy cases they reproduced them with reasonable fidelity. The difficult sentences usually exhibit complicated structure and feature jargon. An abstract may contain some `easy' and some `difficult' sentences. We thus perform our analysis at the sentence level. We split abstracts into sentences using spaCy. We excluded sentences that comprise fewer than two tokens, as these are likely an artifact of errors in sentence splitting. In total, this resulted in 57,505 and 2,428 sentences in the train and test set abstracts, respectively. Quantifying Task Difficulty The test set includes annotations from both crowd workers and domain experts. We treat the latter as ground truth and then define the difficulty of sentences in terms of the observed agreement between expert and lay annotators. Formally, for annotation task $t$ and instance $i$ : $$\text{Difficulty}_{ti} = \frac{\sum _{j=1}^n{f(\text{label}_{ij}, y_i})}{n}$$ (Eq. 3) where $f$ is a scoring function that measures the quality of the label from worker $j$ for sentence $i$ , as compared to a ground truth annotation, $y_i$ . The difficulty score of sentence $i$ is taken as an average over the scores for all $n$ layworkers. We use Spearmans' correlation coefficient as a scoring function. Specifically, for each sentence we create two vectors comprising counts of how many times each token was annotated by crowd and expert workers, respectively, and calculate the correlation between these. Sentences with no labels are treated as maximally easy; those with only either crowd worker or expert label(s) are assumed maximally difficult. The training set contains only crowdsourced annotations. To label the training data, we use a 10-fold validation like setting. We iteratively retrain the LSTM-CRF-Pattern sequence tagger of Patel et al. patel2018syntactic on 9 folds of the training data and use that trained model to predict labels for the 10th. In this way we obtain predictions on the full training set. We then use predicted spans as proxy `ground truth' annotations to calculate the difficulty score of sentences as described above; we normalize these to the [ $0, 1$ ] interval. We validate this approximation by comparing the proxy scores against reference scores over the test set, the Pearson's correlation coefficients are 0.57 for Population, 0.71 for Intervention and 0.68 for Outcome. There exist many sentences that contain neither manual nor predicted annotations. We treat these as maximally easy sentences (with difficulty scores of 0). Such sentences comprise 51%, 42% and 36% for Population, Interventions and Outcomes data respectively, indicating that it is easier to identify sentences that have no Population spans, but harder to identify sentences that have no Interventions or Outcomes spans. This is intuitive as descriptions of the latter two tend to be more technical and dense with medical jargon. We show the distribution of the automatically labeled scores for sentences that do contain spans in Figure 1 . The mean of the Population (p) sentence scores is significantly lower than that for other types of sentences (i and o), again indicating that they are easier on average to annotate. This aligns with a previous finding that annotating Interventions and Outcomes is more difficult than annotating Participants BIBREF5 . 
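A minimal sketch of the scoring behind Eq. (3): per-token annotation-count vectors are built for each crowd worker and for the reference spans, each worker is scored with Spearman's correlation, and the scores are averaged over workers; the paper then normalizes these to [0, 1]. The span format (token-index pairs) and the per-worker handling of degenerate cases are assumptions consistent with the description (no labels on either side is maximally easy; labels on only one side is maximally difficult).

```python
import numpy as np
from scipy.stats import spearmanr

def token_counts(spans, n_tokens):
    """How many annotated spans cover each token position (end exclusive)."""
    counts = np.zeros(n_tokens)
    for start, end in spans:
        counts[start:end] += 1
    return counts

def eq3_score(worker_spans, reference_spans, n_tokens):
    """Average of f(label_ij, y_i) over workers; the paper normalizes these
    averages to [0, 1], with low crowd/reference correlation = high difficulty."""
    ref = token_counts(reference_spans, n_tokens)
    scores = []
    for spans in worker_spans:
        crowd = token_counts(spans, n_tokens)
        if crowd.sum() == 0 and ref.sum() == 0:
            scores.append(1.0)            # no labels at all: maximally easy
        elif crowd.sum() == 0 or ref.sum() == 0:
            scores.append(-1.0)           # labels on one side only: maximally hard
        else:
            rho, _ = spearmanr(crowd, ref)
            scores.append(0.0 if np.isnan(rho) else rho)
    return float(np.mean(scores))
```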
Many sentences contain spans tagged by the LSTM-CRF-Pattern model, but missed by all crowd workers, resulting in a maximally difficult score (1). Inspection of such sentences revealed that some are truly difficult examples, but others are tagging model errors. In either case, such sentences have confused workers and/or the model, and so we retain them all as `difficult' sentences. Content describing the p, i and o, respectively, is quite different. As such, one sentence usually contains (at most) only one of these three content types. We thus treat difficulty prediction for the respective label types as separate tasks. Difficulty is not Worker Agreement Our definition of difficulty is derived from agreement between expert and crowd annotations for the test data, and agreement between a predictive model and crowd annotations in the training data. It is reasonable to ask if these measures are related to inter-annotator agreement, a metric often used in language technology research to identify ambiguous or difficult items. Here we explicitly verify that our definition of difficulty only weakly correlates with inter-annotator agreement. We calculate inter-worker agreement between crowd and expert annotators using Spearman's correlation coefficient. As shown in Table 2 , average agreement between domain experts are considerably higher than agreements between crowd workers for all three label types. This is a clear indication that the crowd annotations are noisier. Furthermore, we compare the correlation between inter-annotator agreement and difficulty scores in the training data. Given that the majority of sentences do not contain a PICO span, we only include in these calculations those that contain a reference label. Pearson's r are 0.34, 0.30 and 0.31 for p, i and o, respectively, confirming that inter-worker agreement and our proposed difficulty score are quite distinct. Predicting Annotation Difficulty We treat difficulty prediction as a regression problem, and propose and evaluate neural model variants for the task. We first train RNN BIBREF19 and CNN BIBREF20 models. We also use the universal sentence encoder (USE) BIBREF6 to induce sentence representations, and train a model using these as features. Following BIBREF6 , we then experiment with an ensemble model that combines the `universal' and task-specific representations to predict annotation difficulty. We expect these universal embeddings to capture general, high-level semantics, and the task specific representations to capture more granular information. Figure 2 depicts the model architecture. Sentences are fed into both the universal sentence encoder and, separately, a task specific neural encoder, yielding two representations. We concatenate these and pass the combined vector to the regression layer. Experimental Setup and Results We trained models for each label type separately. Word embeddings were initialized to 300d GloVe vectors BIBREF21 trained on common crawl data; these are fine-tuned during training. We used the Adam optimizer BIBREF22 with learning rate and decay set to 0.001 and 0.99, respectively. We used batch sizes of 16. We used the large version of the universal sentence encoder with a transformer BIBREF23 . We did not update the pretrained sentence encoder parameters during training. All hyperparamaters for all models (including hidden layers, hidden sizes, and dropout) were tuned using Vizier BIBREF24 via 10-fold cross validation on the training set maximizing for F1. 
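A sketch of the ensemble difficulty regressor described above: a task-specific recurrent encoding (embeddings initialized from GloVe) is concatenated with a frozen universal sentence embedding and passed to a regression head. A BiLSTM stands in for the task-specific encoder, the USE embedding is assumed to be pre-computed outside the model (512-d for the transformer USE), and the hidden sizes and head layout are illustrative rather than the values tuned with Vizier.

```python
import torch
import torch.nn as nn

class DifficultyRegressor(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=128, use_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # init from 300-d GloVe
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden + use_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, token_ids, use_embedding):
        # task-specific representation: final forward/backward BiLSTM states
        _, (h, _) = self.rnn(self.embed(token_ids))
        task_repr = torch.cat([h[0], h[1]], dim=-1)
        combined = torch.cat([task_repr, use_embedding], dim=-1)
        return self.head(combined).squeeze(-1)           # predicted difficulty

model = DifficultyRegressor(vocab_size=30000)            # vocab size is illustrative
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```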
As a baseline, we also trained a linear Support-Vector Regression BIBREF25 model on $n$ -gram features ( $n$ ranges from 1 to 3). Table 3 reports Pearson correlation coefficients between the predictions with each of the neural models and the ground truth difficulty scores. Rows 1-4 correspond to individual models, and row 5 reports the ensemble performance. Columns correspond to label type. Results from all models outperform the baseline SVR model: Pearson's correlation coefficients range from 0.550 to 0.622. The regression correlations are the lowest. The RNN model realizes the strongest performance among the stand-alone (non-ensemble) models, outperforming variants that exploit CNN and USE representations. Combining the RNN and USE further improves results. We hypothesize that this is due to complementary sentence information encoded in universal representations. For all models, correlations for Intervention and Outcomes are higher than for Population, which is expected given the difficulty distributions in Figure 1 . In these, the sentences are more uniformly distributed, with a fair number of difficult and easier sentences. By contrast, in Population there are a greater number of easy sentences and considerably fewer difficult sentences, which makes the difficulty ranking task particularly challenging. Better IE with Difficulty Prediction We next present experiments in which we attempt to use the predicted difficulty during training to improve models for information extraction of descriptions of Population, Interventions and Outcomes from medical article abstracts. We investigate two uses: (1) simply removing the most difficult sentences from the training set, and, (2) re-weighting the most difficult sentences. We again use LSTM-CRF-Pattern as the base model and experimenting on the EBM-NLP corpus BIBREF5 . This is trained on either (1) the training set with difficult sentences removed, or (2) the full training set but with instances re-weighted in proportion to their predicted difficulty score. Following BIBREF5 , we use the Adam optimizer with learning rate of 0.001, decay 0.9, batch size 20 and dropout 0.5. We use pretrained 200d GloVe vectors BIBREF21 to initialize word embeddings, and use 100d hidden char representations. Each word is thus represented with 300 dimensions in total. The hidden size is 100 for the LSTM in the character representation component, and 200 for the LSTM in the information extraction component. We train for 15 epochs, saving parameters that achieve the best F1 score on a nested development set. Removing Difficult Examples We first evaluate changes in performance induced by training the sequence labeling model using less data by removing difficult sentences prior to training. The hypothesis here is that these difficult instances are likely to introduce more noise than signal. We used a cross-fold approach to predict sentence difficulties, training on 9/10ths of the data and scoring the remaining 1/10th at a time. We then sorted sentences by predicted difficulty scores, and experimented with removing increasing numbers of these (in order of difficulty) prior to training the LSTM-CRF-Pattern model. Figure 3 shows the results achieved by the LSTM-CRF-Pattern model after discarding increasing amounts of the training data: the $x$ and $y$ axes correspond to the the percentage of data removed and F1 scores, respectively. 
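A minimal sketch of the filtering step just described: sort the training sentences by their cross-fold predicted difficulty and drop the hardest fraction before retraining the tagger. The list-based data layout and names are mine.

```python
def drop_most_difficult(sentences, labels, difficulty, fraction):
    """Remove the `fraction` of training sentences with the highest predicted
    difficulty; returns the kept (sentence, label) lists in original order."""
    n_keep = int(round(len(sentences) * (1.0 - fraction)))
    order = sorted(range(len(sentences)), key=lambda i: difficulty[i])
    kept = sorted(order[:n_keep])                  # keep the easiest n_keep items
    return [sentences[i] for i in kept], [labels[i] for i in kept]

# e.g. keep all but the hardest 4% before retraining LSTM-CRF-Pattern:
# X_train, y_train = drop_most_difficult(X, y, predicted_difficulty, 0.04)
```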
We contrast removing sentences predicted to be difficult with removing them (a) randomly (i.i.d.), and, (b) in inverse order of predicted inter-annotator agreement. The agreement prediction model is trained exactly the same like difficult prediction model, with simply changing the difficult score to annotation agreement. F1 scores actually improve (marginally) when we remove the most difficult sentences, up until we drop 4% of the data for Population and Interventions, and 6% for Outcomes. Removing training points at i.i.d. random degrades performance, as expected. Removing sentences in order of disagreement seems to have similar effect as removing them by difficulty score when removing small amount of the data, but the F1 scores drop much faster when removing more data. These findings indicate that sentences predicted to be difficult are indeed noisy, to the extent that they do not seem to provide the model useful signal. Re-weighting by Difficulty We showed above that removing a small number of the most difficult sentences does not harm, and in fact modestly improves, medical IE model performance. However, using the available data we are unable to test if this will be useful in practice, as we would need additional data to determine how many difficult sentences should be dropped. We instead explore an alternative, practical means of exploiting difficulty predictions: we re-weight sentences during training inversely to their predicted difficulty. Formally, we weight sentence $i$ with difficulty scores above $\tau $ according to: $1-a\cdot (d_i-\tau )/(1-\tau )$ , where $d_i$ is the difficulty score for sentence $i$ , and $a$ is a parameter codifying the minimum weight value. We set $\tau $ to 0.8 so as to only re-weight sentences with difficulty in the top 20th percentile, and we set $a$ to 0.5. The re-weighting is equivalent to down-sampling the difficult sentences. LSTM-CRF-Pattern is our base model. Table 4 reports the precision, recall and F1 achieved both with and without sentence re-weighting. Re-weighting improves all metrics modestly but consistently. All F1 differences are statistically significant under a sign test ( $p<0.01$ ). The model with best precision is different for Patient, Intervention and Outcome labels. However re-weighting by difficulty does consistently yield the best recall for all three extraction types, with the most notable improvement for i and o, where recall improved by 10 percentage points. This performance increase translated to improvements in F1 across all types, as compared to the base model and to re-weighting by agreement. Involving Expert Annotators The preceding experiments demonstrate that re-weighting difficult sentences annotated by the crowd generally improves the extraction models. Presumably the performance is influenced by the annotation quality. We now examine the possibility that the higher quality and more consistent annotations of domain experts on the difficult instances will benefit the extraction model. This simulates an annotation strategy in which we route difficult instances to domain experts and easier ones to crowd annotators. We also contrast the value of difficult data to that of an i.i.d. random sample of the same size, both annotated by experts. Expert annotations of Random and Difficult Instances We re-annotate by experts a subset of most difficult instances and the same number of random instances. 
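For reference, the re-weighting rule from the previous subsection written out directly (with the reported values τ = 0.8 and a = 0.5): sentences at or below the difficulty threshold keep weight 1, and the hardest sentences bottom out at 1 − a.

```python
def instance_weight(d_i, tau=0.8, a=0.5):
    """Training weight for a sentence with predicted difficulty d_i in [0, 1]."""
    if d_i <= tau:
        return 1.0
    return 1.0 - a * (d_i - tau) / (1.0 - tau)

# instance_weight(0.8) == 1.0, instance_weight(0.9) == 0.75, instance_weight(1.0) == 0.5
```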
As collecting annotations from experts is slow and expensive, we only re-annotate the difficult instances for the interventions extraction task. We re-annotate the abstracts which cover the sentences with predicted difficulty scores in the top 5 percentile. We rank the abstracts from the training set by the count of difficult sentences, and re-annotate the abstracts that contain the most difficult sentences. Constrained by time and budget, we select only 2000 abstracts for re-annotation; 1000 of these are top-ranked, and 1000 are randomly sampled. This re-annotation cost $3,000. We have released the new annotation data at: https://github.com/bepnye/EBM-NLP. Following BIBREF5 , we recruited five medical experts via Up-work with advanced medical training and strong technical reading/writing skills. The expert annotator were asked to read the entire abstract and highlight, using the BRAT toolkit BIBREF26 , all spans describing medical Interventions. Each abstract is only annotated by one expert. We examined 30 re-annotated abstracts to ensure the annotation quality before hiring the annotator. Table 5 presents the results of LSTM-CRF-Pattern model trained on the reannotated difficult subset and the random subset. The first two rows show the results for models trained with expert annotations. The model trained on random data has a slightly better F1 than that trained on the same amount of difficult data. The model trained on random data has higher precision but lower recall. Rows 3 and 4 list the results for models trained on the same data but with crowd annotation. Models trained with expert-annotated data are clearly superior to those trained with crowd labels with respect to F1, indicating that the experts produced higher quality annotations. For crowdsourced annotations, training the model with data sampled at i.i.d. random achieves 2% higher F1 than when difficult instances are used. When expert annotations are used, this difference is less than 1%. This trend in performance may be explained by differences in annotation quality: the randomly sampled set was more consistently annotated by both experts and crowd because the difficult set is harder. However, in both cases expert annotations are better, with a bigger difference between the expert and crowd models on the difficult set. The last row is the model trained on all 5k abstracts with crowd annotations. Its F1 score is lower than either expert model trained on only 20% of data, suggesting that expert annotations should be collected whenever possible. Again the crowd model on complete data has higher precision than expert models but its recall is much lower. Routing To Experts or Crowd So far a system was trained on one type of data, either labeled by crowd or experts. We now examine the performance of a system trained on data that was routed to either experts or crowd annotators depending on their predicted difficult. Given the results presented so far mixing annotators may be beneficial given their respective trade-offs of precision and recall. We use the annotations from experts for an abstract if it exists otherwise use crowd annotations. The results are presented in Table 6 . Rows 1 and 2 repeat the performance of the models trained on difficult subset and random subset with expert annotations only respectively. The third row is the model trained by combining difficult and random subsets with expert annotations. 
There are around 250 abstracts in the overlap of these two sets, so there are total 1.75k abstracts used for training the D+R model. Rows 4 to 6 are the models trained on all 5k abstracts with mixed annotations, where Other means the rest of the abstracts with crowd annotation only. The results show adding more training data with crowd annotation still improves at least 1 point F1 score in all three extraction tasks. The improvement when the difficult subset with expert annotations is mixed with the remaining crowd annotation is 3.5 F1 score, much larger than when a random set of expert annotations are added. The model trained with re-annotating the difficult subset (D+Other) also outperforms the model with re-annotating the random subset (R+Other) by 2 points in F1. The model trained with re-annotating both of difficult and random subsets (D+R+Other), however, achieves only marginally higher F1 than the model trained with the re-annotated difficult subset (D+Other). In sum, the results clearly indicate that mixing expert and crowd annotations leads to better models than using solely crowd data, and better than using expert data alone. More importantly, there is greater gain in performance when instances are routed according to difficulty, as compared to randomly selecting the data for expert annotators. These findings align with our motivating hypothesis that annotation quality for difficult instances is important for final model performance. They also indicate that mixing annotations from expert and crowd could be an effective way to achieve acceptable model performance given a limited budget. How Many Expert Annotations? We established that crowd annotation are still useful in supplementing expert annotations for medical IE. Obtaining expert annotations for the one thousand most difficult instances greatly improved the model performance. However the choice of how many difficult instances to annotate was an uninformed choice. Here we check if less expert data would have yielded similar gains. Future work will need to address how best to choose this parameter for a routing system. We simulate a routing scenario in which we send consecutive batches of the most difficult examples to the experts for annotation. We track changes in performance as we increase the number of most-difficult-articles sent to domain experts. As shown in Figure 4 , adding expert annotations for difficult articles consistently increases F1 scores. The performance gain is mostly from increased recall; the precision changes only a bit with higher quality annotation. This observation implies that crowd workers often fail to mark target tokens, but do not tend to produce large numbers of false positives. We suspect such failures to identify relevant spans/tokens are due to insufficient domain knowledge possessed by crowd workers. The F1 score achieved after re-annotating the 600 most-difficult articles reaches 68.1%, which is close to the performance when re-annotating 1000 random articles. This demonstrates the effectiveness of recognizing difficult instances. The trend when we use up all expert data is still upward, so adding even more expert data is likely to further improve performance. Unfortunately we exhausted our budget and were not able to obtain additional expert annotations. It is likely that as the size of the expert annotations increases, the value of crowd annotations will diminish. This investigation is left for future work. 
Conclusions We have introduced the task of predicting annotation difficulty for biomedical information extraction (IE). We trained neural models using different learned representations to score texts in terms of their difficulty. Results from all models were strong with Pearson’s correlation coefficients higher than 0.45 in almost all evaluations, indicating the feasibility of this task. An ensemble model combining universal and task specific feature sentence vectors yielded the best results. Experiments on biomedical IE tasks show that removing up to $\sim $ 10% of the sentences predicted to be most difficult did not decrease model performance, and that re-weighting sentences inversely to their difficulty score during training improves predictive performance. Simulations in which difficult examples are routed to experts and other instances to crowd annotators yields the best results, outperforming the strategy of randomly selecting data for expert annotation, and substantially improving upon the approach of relying exclusively on crowd annotations. In future work, routing strategies based on instance difficulty could be further investigated for budget-quality trade-off. Acknowledgements This work has been partially supported by NSF1748771 grant. Wallace was support in part by NIH/NLM R01LM012086. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How much higher quality is the resulting annotated data? Only give me the answer and do not output any other words.
Answer:
[ "improvement when the difficult subset with expert annotations is mixed with the remaining crowd annotation is 3.5 F1 score, much larger than when a random set of expert annotations are added" ]
276
5,627
single_doc_qa
bc69dc1dea864a47b1d629bdc026c5e5
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: \section{Introduction} Spectral line surveys have revealed that high-mass star-forming regions are rich reservoirs of molecules from simple diatomic species to complex and larger molecules (e.g., \citealt{schilke1997b,hatchell1998b,comito2005,bisschop2007}). However, there have been rarely studies undertaken to investigate the chemical evolution during massive star formation from the earliest evolutionary stages, i.e., from High-Mass Starless Cores (HMSCs) and High-Mass Cores with embedded low- to intermediate-mass protostars destined to become massive stars, via High-Mass Protostellar Objects (HMPOs) to the final stars that are able to produce Ultracompact H{\sc ii} regions (UCH{\sc ii}s, see \citealt{beuther2006b} for a recent description of the evolutionary sequence). The first two evolutionary stages are found within so-called Infrared Dark Clouds (IRDCs). While for low-mass stars the chemical evolution from early molecular freeze-out to more evolved protostellar cores is well studied (e.g., \citealt{bergin1997,dutrey1997,pavlyuchenkov2006,joergensen2007}), it is far from clear whether similar evolutionary patterns are present during massive star formation. To better understand the chemical evolution of high-mass star-forming regions we initiated a program to investigate the chemical properties from IRDCs to UCH{\sc ii}s from an observational and theoretical perspective. We start with single-dish line surveys toward a large sample obtaining their basic characteristics, and then perform detailed studies of selected sources using interferometers on smaller scales. These observations are accompanied by theoretical modeling of the chemical processes. Long-term goals are the chemical characterization of the evolutionary sequence in massive star formation, the development of chemical clocks, and the identification of molecules as astrophysical tools to study the physical processes during different evolutionary stages. Here, we present an initial study of the reactive radical ethynyl (C$_2$H) combining single-dish and interferometer observations with chemical modeling. Although C$_2$H was previously observed in low-mass cores and Photon Dominated Regions (e.g., \citealt{millar1984,jansen1995}), so far it was not systematically investigated in the framework of high-mass star formation. \section{Observations} \label{obs} The 21 massive star-forming regions were observed with the Atacama Pathfinder Experiment (APEX) in the 875\,$\mu$m window in fall 2006. We observed 1\,GHz from 338 to 339\,GHz and 1\,GHz in the image sideband from 349 to 350\,GHz. The spectral resolution was 0.1\,km\,s$^{-1}$, but we smoothed the data to $\sim$0.9\,km\,s$^{-1}$. The average system temperatures were around 200\,K, each source had on-source integration times between 5 and 16 min. The data were converted to main-beam temperatures with forward and beam efficiencies of 0.97 and 0.73, respectively \citep{belloche2006}. The average $1\sigma$ rms was 0.4\,K. The main spectral features of interest are the C$_2$H lines around 349.4\,GHz with upper level excitation energies $E_u/k$ of 42\,K (line blends of C$_2$H$(4_{5,5}-3_{4,4})$ \& C$_2$H$(4_{5,4}-3_{4,3})$ at 349.338\,GHz, and C$_2$H$(4_{4,4}-3_{3,3})$ \& C$_2$H$(4_{4,3}-3_{3,2})$ at 349.399\,GHz). The beam size was $\sim 18''$. 
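The conversion to main-beam temperature with the quoted efficiencies is presumably the standard relation T_mb = (F_eff / B_eff) · T_A*; the one-line sketch below assumes that convention, which the text does not spell out.

```python
def main_beam_temperature(t_a_star, f_eff=0.97, b_eff=0.73):
    """Main-beam temperature [K] from the calibrated antenna temperature T_A*,
    assuming the usual T_mb = (F_eff / B_eff) * T_A* convention."""
    return (f_eff / b_eff) * t_a_star

# e.g. a 1 K feature in T_A* corresponds to ~1.33 K on the main-beam scale
```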
The original Submillimeter Array (SMA) C$_2$H data toward the HMPO\,18089-1732 were first presented in \citet{beuther2005c}. There we used the compact and extended configurations resulting in good images for all spectral lines except of C$_2$H. For this project, we re-worked on these data only using the compact configuration. Because the C$_2$H emission is distributed on larger scales (see \S\ref{results}), we were now able to derive a C$_2$H image. The integration range was from 32 to 35\,km\,s$^{-1}$, and the achieved $1\sigma$ rms of the C$_2$H image was 450\,mJy\,beam$^{-1}$. For more details on these observations see \citet{beuther2005c}. \section{Results} \label{results} The sources were selected to cover all evolutionary stages from IRDCs via HMPOs to UCH{\sc ii}s. We derived our target list from the samples of \citet{klein2005,fontani2005,hill2005,beltran2006}. Table \ref{sample} lists the observed sources, their coordinates, distances, luminosities and a first order classification into the evolutionary sub-groups IRDCs, HMPOs and UCH{\sc ii}s based on the previously available data. Although this classification is only based on a limited set of data, here we are just interested in general evolutionary trends. Hence, the division into the three main classes is sufficient. Figure \ref{spectra} presents sample spectra toward one source of each evolutionary group. While we see several CH$_3$OH lines as well as SO$_2$ and H$_2$CS toward some of the HMPOs and UCH{\sc ii}s but not toward the IRDCs, the surprising result of this comparison is the presence of the C$_2$H lines around 349.4\,GHz toward all source types from young IRDCs via the HMPOs to evolved UCH{\sc ii}s. Table \ref{sample} lists the peak brightness temperatures, the integrated intensities and the FWHM line-widths of the C$_2$H line blend at 349.399\,GHz. The separation of the two lines of 1.375\,MHz already corresponds to a line-width of 1.2\,km\,s$^{-1}$. We have three C$_2$H non-detections (2 IRDCs and 1 HMPO), however, with no clear trend with respect to the distances or the luminosities (the latter comparison is only possible for the HMPOs). While IRDCs are on average colder than more evolved sources, and have lower brightness temperatures, the non-detections are more probable due to the relatively low sensitivity of the short observations (\S\ref{obs}). Hence, the data indicate that the C$_2$H lines are detected independent of the evolutionary stage of the sources in contrast to the situation with other molecules. When comparing the line-widths between the different sub-groups, one finds only a marginal difference between the IRDCs and the HMPOs (the average $\Delta v$ of the two groups are 2.8 and 3.1\,km\,s$^{-1}$). However, the UCH{\sc ii}s exhibit significantly broader line-widths with an average value of 5.5\,km\,s$^{-1}$. Intrigued by this finding, we wanted to understand the C$_2$H spatial structure during the different evolutionary stages. Therefore, we went back to a dataset obtained with the Submillimeter Array toward the hypercompact H{\sc ii} region IRAS\,18089-1732 with a much higher spatial resolution of $\sim 1''$ \citep{beuther2005c}. Albeit this hypercompact H{\sc ii} region belongs to the class of HMPOs, it is already in a relatively evolved stage and has formed a hot core with a rich molecular spectrum. \citet{beuther2005c} showed the spectral detection of the C$_2$H lines toward this source, but they did not present any spatially resolved images. 
To recover large-scale structure, we restricted the data to those from the compact SMA configuration (\S\ref{obs}). With this refinement, we were able to produce a spatially resolved C$_2$H map of the line blend at 349.338\,GHz with an angular resolution of $2.9''\times 1.4''$ (corresponding to an average linear resolution of 7700\,AU at the given distance of 3.6\,kpc). Figure \ref{18089} presents the integrated C$_2$H emission with a contour overlay of the 860\,$\mu$m continuum source outlining the position of the massive protostar. In contrast to almost all other molecular lines that peak along with the dust continuum \citep{beuther2005c}, the C$_2$H emission surrounds the continuum peak in a shell-like fashion. \section{Discussion and Conclusions} To understand the observations, we conducted a simple chemical modeling of massive star-forming regions. A 1D cloud model with a mass of 1200\,M$_\sun$, an outer radius of 0.36\,pc and a power-law density profile ($\rho\propto r^p$ with $p=-1.5$) is the initially assumed configuration. Three cases are studied: (1) a cold isothermal cloud with $T=10$\,K, (2) $T=50$\,K, and (3) a warm model with a temperature profile $T\propto r^q$ with $q=-0.4$ and a temperature at the outer radius of 44\,K. The cloud is illuminated by the interstellar UV radiation field (IRSF, \citealt{draine1978}) and by cosmic ray particles (CRP). The ISRF attenuation by single-sized $0.1\mu$m silicate grains at a given radius is calculated in a plane-parallel geometry following \citet{vandishoeck1988}. The CRP ionization rate is assumed to be $1.3\times 10^{-17}$~s$^{-1}$ \citep{spitzer1968}. The gas-grain chemical model by \citet{vasyunin2008} with the desorption energies and surface reactions from \citet{garrod2006} is used. Gas-phase reaction rates are taken from RATE\,06 \citep{woodall2007}, initial abundances, were adopted from the ``low metal'' set of \citet{lee1998}. Figure \ref{model} presents the C$_2$H abundances for the three models at two different time steps: (a) 100\,yr, and (b) in a more evolved stage after $5\times10^4$\,yr. The C$_2$H abundance is high toward the core center right from the beginning of the evolution, similar to previous models (e.g., \citealt{millar1985,herbst1986,turner1999}). During the evolution, the C$_2$H abundance stays approximately constant at the outer core edges, whereas it decreases by more than three orders of magnitude in the center, except for the cold $T=10$~K model. The C$_2$H abundance profiles for all three models show similar behavior. The chemical evolution of ethynyl is determined by relative removal rates of carbon and oxygen atoms or ions into molecules like CO, OH, H$_2$O. Light ionized hydrocarbons CH$^+_{\rm n}$ (n=2..5) are quickly formed by radiative association of C$^+$ with H$_2$ and hydrogen addition reactions: C$^+$ $\rightarrow$ CH$_2^+$ $\rightarrow$ CH$_3^+$ $\rightarrow$ CH$_5^+$. The protonated methane reacts with electrons, CO, C, OH, and more complex species at later stage and forms methane. The CH$_4$ molecules undergo reactive collisions with C$^+$, producing C$_2$H$_2^+$ and C$_2$H$_3^+$. An alternative way to produce C$_2$H$_2^+$ is the dissociative recombination of CH$_5^+$ into CH$_3$ followed by reactions with C$^+$. Finally, C$_2$H$_2^+$ and C$_2$H$_3^+$ dissociatively recombine into CH, C$_2$H, and C$_2$H$_2$. 
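A small sketch of the radial profiles used in the three model setups, written as power laws anchored at the outer radius (warm model: T(r) = 44 K · (r / 0.36 pc)^(−0.4); density shape ρ ∝ r^(−1.5)). The density normalization that makes the cloud mass 1200 M_⊙ is omitted, and the radial grid is illustrative.

```python
import numpy as np

R_OUT_PC = 0.36      # outer radius [pc]
P_DENS = -1.5        # rho ∝ r^p
Q_TEMP = -0.4        # T ∝ r^q (warm model)
T_OUT_K = 44.0       # temperature at the outer radius (warm model)

def temperature(r_pc, model="warm"):
    r = np.asarray(r_pc, dtype=float)
    if model == "cold":
        return np.full_like(r, 10.0)               # isothermal 10 K model
    if model == "warm50":
        return np.full_like(r, 50.0)               # isothermal 50 K model
    return T_OUT_K * (r / R_OUT_PC) ** Q_TEMP      # warm, radially varying model

def density_shape(r_pc):
    """Unnormalized density profile; scale so the enclosed mass is 1200 M_sun."""
    return (np.asarray(r_pc, dtype=float) / R_OUT_PC) ** P_DENS

radii = np.linspace(0.01, R_OUT_PC, 50)            # radial grid [pc]
t_warm, rho_rel = temperature(radii), density_shape(radii)
```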
The major removal for C$_2$H is either the direct neutral-neutral reaction with O that forms CO, or the same reaction but with heavier carbon chain ions that are formed from C$_2$H by subsequent insertion of carbon. At later times, depletion and gas-phase reactions with more complex species may enter into this cycle. At the cloud edge the interstellar UV radiation instantaneously dissociates CO despite its self-shielding, re-enriching the gas with elemental carbon. The transformation of C$_2$H into CO and other species proceeds efficiently in dense regions, in particular in the ``warm'' model where endothermic reactions result in rich molecular complexity of the gas (see Fig.~\ref{model}). In contrast, in the ``cold'' 10\,K model gas-grain interactions and surface reactions become important. As a result, a large fraction of oxygen is locked in water ice that is hard to desorb ($E_{\rm des} \sim 5500$~K), while half of the elemental carbon goes to volatile methane ice ($E_{\rm des} \sim 1300$~K). Upon CRP heating of dust grains, this leads to much higher gas-phase abundance of C$_2$H in the cloud core for the cold model compared to the warm model. The effect is not that strong for less dense regions at larger radii from the center. Since the C$_2$H emission is anti-correlated with the dust continuum emission in the case of IRAS\,18089-1732 (Fig.\,\ref{18089}), we do not have the H$_2$ column densities to quantitatively compare the abundance profiles of IRAS\,18089-1732 with our model. However, data and model allow a qualitative comparison of the spatial structures. Estimating an exact evolutionary time for IRAS\,18089-1732 is hardly possible, but based on the strong molecular line emission, its high central gas temperatures and the observed outflow-disk system \citep{beuther2004a,beuther2004b,beuther2005c}, an approximate age of $5\times10^4$\,yr appears reasonable. Although dynamical and chemical times are not necessarily exactly the same, in high-mass star formation they should not differ to much: Following the models by \citet{mckee2003} or \citet{krumholz2006b}, the luminosity rises strongly right from the onset of collapse which can be considered as a starting point for the chemical evolution. At the same time disks and outflows evolve, which should hence have similar time-scales. The diameter of the shell-like C$_2$H structure in IRAS\,18089-1732 is $\sim 5''$ (Fig.\,\ref{18089}), or $\sim$9000\,AU in radius at the given distance of 3.6\,kpc. This value is well matched by the modeled region with decreased C$_2$H abundance (Fig.\,\ref{model}). Although in principle optical depths and/or excitation effects could mimic the C$_2$H morphology, we consider this as unlikely because the other observed molecules with many different transitions all peak toward the central submm continuum emission in IRAS\,18089-1732 \citep{beuther2005c}. Since C$_2$H is the only exception in that rich dataset, chemical effects appear the more plausible explanation. The fact that we see C$_2$H at the earliest and the later evolutionary stages can be explained by the reactive nature of C$_2$H: it is produced quickly early on and gets replenished at the core edges by the UV photodissociation of CO. The inner ``chemical'' hole observed toward IRAS\,18089-1732 can be explained by C$_2$H being consumed in the chemical network forming CO and more complex molecules like larger carbon-hydrogen complexes and/or depletion. 
The data show that C$_2$H is not suited to investigate the central gas cores in more evolved sources, however, our analysis indicates that C$_2$H may be a suitable tracer of the earliest stages of (massive) star formation, like N$_2$H$^+$ or NH$_3$ (e.g., \citealt{bergin2002,tafalla2004,beuther2005a,pillai2006}). While a spatial analysis of the line emission will give insights into the kinematics of the gas and also the evolutionary stage from chemical models, multiple C$_2$H lines will even allow a temperature characterization. With its lowest $J=1-0$ transitions around 87\,GHz, C$_2$H has easily accessible spectral lines in several bands between the 3\,mm and 850\,$\mu$m. Furthermore, even the 349\,GHz lines presented here have still relatively low upper level excitation energies ($E_u/k\sim42$\,K), hence allowing to study cold cores even at sub-millimeter wavelengths. This prediction can further be proved via high spectral and spatial resolution observations of different C$_2$H lines toward young IRDCs. \acknowledgments{H.B. acknowledges financial support by the Emmy-Noether-Programm of the Deutsche Forschungsgemeinschaft (DFG, grant BE2578). } Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What molecule was the focus of the study? Only give me the answer and do not output any other words.
Answer:
[ "The focus of the study was on the reactive radical ethynyl (C$_2$H)." ]
276
4,116
single_doc_qa
29e26eefa54f4b08a01cea32d96a91d4
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Ancient Chinese is the writing language in ancient China. It is a treasure of Chinese culture which brings together the wisdom and ideas of the Chinese nation and chronicles the ancient cultural heritage of China. Learning ancient Chinese not only helps people to understand and inherit the wisdom of the ancients, but also promotes people to absorb and develop Chinese culture. However, it is difficult for modern people to read ancient Chinese. Firstly, compared with modern Chinese, ancient Chinese is more concise and shorter. The grammatical order of modern Chinese is also quite different from that of ancient Chinese. Secondly, most modern Chinese words are double syllables, while the most of the ancient Chinese words are monosyllabic. Thirdly, there is more than one polysemous phenomenon in ancient Chinese. In addition, manual translation has a high cost. Therefore, it is meaningful and useful to study the automatic translation from ancient Chinese to modern Chinese. Through ancient-modern Chinese translation, the wisdom, talent and accumulated experience of the predecessors can be passed on to more people. Neural machine translation (NMT) BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 has achieved remarkable performance on many bilingual translation tasks. It is an end-to-end learning approach for machine translation, with the potential to show great advantages over the statistic machine translation (SMT) systems. However, NMT approach has not been widely applied to the ancient-modern Chinese translation task. One of the main reasons is the limited high-quality parallel data resource. The most popular method of acquiring translation examples is bilingual text alignment BIBREF5 . This kind of method can be classified into two types: lexical-based and statistical-based. The lexical-based approaches BIBREF6 , BIBREF7 focus on lexical information, which utilize the bilingual dictionary BIBREF8 , BIBREF9 or lexical features. Meanwhile, the statistical-based approaches BIBREF10 , BIBREF11 rely on statistical information, such as sentence length ratio in two languages and align mode probability. However, these methods are designed for other bilingual language pairs that are written in different language characters (e.g. English-French, Chinese-Japanese). The ancient-modern Chinese has some characteristics that are quite different from other language pairs. For example, ancient and modern Chinese are both written in Chinese characters, but ancient Chinese is highly concise and its syntactical structure is different from modern Chinese. The traditional methods do not take these characteristics into account. In this paper, we propose an effective ancient-modern Chinese text alignment method at the level of clause based on the characteristics of these two languages. The proposed method combines both lexical-based information and statistical-based information, which achieves 94.2 F1-score on Test set. Recently, a simple longest common subsequence based approach for ancient-modern Chinese sentence alignment is proposed in BIBREF12 . Our experiments showed that our proposed alignment approach performs much better than their method. We apply the proposed method to create a large translation parallel corpus which contains INLINEFORM0 1.24M bilingual sentence pairs. 
To our best knowledge, this is the first large high-quality ancient-modern Chinese dataset. Furthermore, we test SMT models and various NMT models on the created dataset and provide a strong baseline for this task. Overview There are four steps to build the ancient-modern Chinese translation dataset: (i) The parallel corpus crawling and cleaning. (ii) The paragraph alignment. (iii) The clause alignment based on aligned paragraphs. (iv) Augmenting data by merging aligned adjacent clauses. The most critical step is the third step. Clause Alignment In the clause alignment step, we combine both statistical-based and lexical-based information to measure the score for each possible clause alignment between ancient and modern Chinese strings. The dynamic programming is employed to further find overall optimal alignment paragraph by paragraph. According to the characteristics of the ancient and modern Chinese languages, we consider the following factors to measure the alignment score INLINEFORM0 between a bilingual clause pair: Lexical Matching. The lexical matching score is used to calculate the matching coverage of the ancient clause INLINEFORM0 . It contains two parts: exact matching and dictionary matching. An ancient Chinese character usually corresponds to one or more modern Chinese words. In the first part, we carry out Chinese Word segmentation to the modern Chinese clause INLINEFORM1 . Then we match the ancient characters and modern words in the order from left to right. In further matching, the words that have been matched will be deleted from the original clauses. However, some ancient characters do not appear in its corresponding modern Chinese words. An ancient Chinese dictionary is employed to address this issue. We preprocess the ancient Chinese dictionary and remove the stop words. In this dictionary matching step, we retrieve the dictionary definition of each unmatched ancient character and use it to match the remaining modern Chinese words. To reduce the impact of universal word matching, we use Inverse Document Frequency (IDF) to weight the matching words. The lexical matching score is calculated as: DISPLAYFORM0 The above equation is used to calculate the matching coverage of the ancient clause INLINEFORM0 . The first term of equation ( EQREF8 ) represents exact matching score. INLINEFORM1 denotes the length of INLINEFORM2 , INLINEFORM3 denotes each ancient character in INLINEFORM4 , and the indicator function INLINEFORM5 indicates whether the character INLINEFORM6 can match the words in the clause INLINEFORM7 . The second term is dictionary matching score. Here INLINEFORM8 and INLINEFORM9 represent the remaining unmatched strings of INLINEFORM10 and INLINEFORM11 , respectively. INLINEFORM12 denotes the INLINEFORM13 -th character in the dictionary definition of the INLINEFORM14 and its IDF score is denoted as INLINEFORM15 . The INLINEFORM16 is a predefined parameter which is used to normalize the IDF score. We tuned the value of this parameter on the Dev set. Statistical Information. Similar to BIBREF11 and BIBREF6 , the statistical information contains alignment mode and length information. There are many alignment modes between ancient and modern Chinese languages. If one ancient Chinese clause aligns two adjacent modern Chinese clauses, we call this alignment as 1-2 alignment mode. We show some examples of different alignment modes in Figure FIGREF9 . In this paper, we only consider 1-0, 0-1, 1-1, 1-2, 2-1 and 2-2 alignment modes which account for INLINEFORM0 of the Dev set. 
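A simplified sketch of the lexical matching score described above: left-to-right exact matching of ancient characters against the segmented modern words (consuming each matched word), followed by dictionary-definition matching of the leftover characters weighted by IDF and normalized by the predefined parameter (written `lam` here). The equation itself is a placeholder in this copy, so the combination below approximates the description rather than reproducing the paper's exact formula.

```python
def lexical_matching_score(ancient_chars, modern_words, anc_dict, idf, lam):
    """Approximate matching coverage of the ancient clause.

    ancient_chars: characters of the ancient clause
    modern_words:  segmented words of the modern clause
    anc_dict:      ancient character -> dictionary definition (stop words removed)
    idf:           character -> IDF weight; lam normalizes the IDF term
    """
    remaining = list(modern_words)
    exact, leftover = 0, []
    for ch in ancient_chars:                       # exact matching, left to right
        hit = next((w for w in remaining if ch in w), None)
        if hit is not None:
            exact += 1
            remaining.remove(hit)                  # matched words are consumed
        else:
            leftover.append(ch)
    dict_score = 0.0
    for ch in leftover:                            # dictionary matching
        for d in anc_dict.get(ch, ""):             # characters of the definition
            if any(d in w for w in remaining):
                dict_score += idf.get(d, 0.0) / lam
    return (exact + dict_score) / max(len(ancient_chars), 1)
```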
We estimate the probability Pr INLINEFORM1 n-m INLINEFORM2 of each alignment mode n-m on the Dev set. To utilize length information, we make an investigation on length correlation between these two languages. Based on the assumption of BIBREF11 that each character in one language gives rise to a random number of characters in the other language and those random variables INLINEFORM3 are independent and identically distributed with a normal distribution, we estimate the mean INLINEFORM4 and standard deviation INLINEFORM5 from the paragraph aligned parallel corpus. Given a clause pair INLINEFORM6 , the statistical information score can be calculated by: DISPLAYFORM0 where INLINEFORM0 denotes the normal distribution probability density function. Edit Distance. Because ancient and modern Chinese are both written in Chinese characters, we also consider using the edit distance. It is a way of quantifying the dissimilarity between two strings by counting the minimum number of operations (insertion, deletion, and substitution) required to transform one string into the other. Here we define the edit distance score as: DISPLAYFORM0 Dynamic Programming. The overall alignment score for each possible clause alignment is as follows: DISPLAYFORM0 Here INLINEFORM0 and INLINEFORM1 are pre-defined interpolation factors. We use dynamic programming to find the overall optimal alignment paragraph by paragraph. Let INLINEFORM2 be total alignment scores of aligning the first to INLINEFORM3 -th ancient Chinese clauses with the first to to INLINEFORM4 -th modern Chinese clauses, and the recurrence then can be described as follows: DISPLAYFORM0 Where INLINEFORM0 denotes concatenate clause INLINEFORM1 to clause INLINEFORM2 . As we discussed above, here we only consider 1-0, 0-1, 1-1, 1-2, 2-1 and 2-2 alignment modes. Ancient-Modern Chinese Dataset Data Collection. To build the large ancient-modern Chinese dataset, we collected 1.7K bilingual ancient-modern Chinese articles from the internet. More specifically, a large part of the ancient Chinese data we used come from ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era. They used plain and accurate words to express what happened at that time, and thus ensure the generality of the translated materials. Paragraph Alignment. To further ensure the quality of the new dataset, the work of paragraph alignment is manually completed. After data cleaning and manual paragraph alignment, we obtained 35K aligned bilingual paragraphs. Clause Alignment. We applied our clause alignment algorithm on the 35K aligned bilingual paragraphs and obtained 517K aligned bilingual clauses. The reason we use clause alignment algorithm instead of sentence alignment is because we can construct more aligned sentences more flexibly and conveniently. To be specific, we can get multiple additional sentence level bilingual pairs by “data augmentation”. Data Augmentation. We augmented the data in the following way: Given an aligned clause pair, we merged its adjacent clause pairs as a new sample pair. For example, suppose we have three adjacent clause level bilingual pairs: ( INLINEFORM0 , INLINEFORM1 ), ( INLINEFORM2 , INLINEFORM3 ), and ( INLINEFORM4 , INLINEFORM5 ). We can get some additional sentence level bilingual pairs, such as: ( INLINEFORM6 , INLINEFORM7 ) and ( INLINEFORM8 , INLINEFORM9 ). 
Here INLINEFORM10 , INLINEFORM11 , and INLINEFORM12 are adjacent clauses in the original paragraph, and INLINEFORM13 denotes concatenate clause INLINEFORM14 to clause INLINEFORM15 . The advantage of using this data augmentation method is that compared with only using ( INLINEFORM16 , INLINEFORM17 ) as the training data, we can also use ( INLINEFORM18 , INLINEFORM19 ) and ( INLINEFORM20 , INLINEFORM21 ) as the training data, which can provide richer supervision information for the model and make the model learn the align information between the source language and the target language better. After the data augmentation, we filtered the sentences which are longer than 50 or contain more than four clause pairs. Dataset Creation. Finally, we split the dataset into three sets: training (Train), development (Dev) and testing (Test). Note that the unaugmented dataset contains 517K aligned bilingual clause pairs from 35K aligned bilingual paragraphs. To keep all the sentences in different sets come from different articles, we split the 35K aligned bilingual paragraphs into Train, Dev and Test sets following these ratios respectively: 80%, 10%, 10%. Before data augmentation, the unaugmented Train set contains INLINEFORM0 aligned bilingual clause pairs from 28K aligned bilingual paragraphs. Then we augmented the Train, Dev and Test sets respectively. Note that the augmented Train, Dev and Test sets also contain the unaugmented data. The statistical information of the three data sets is shown in Table TABREF17 . We show some examples of data in Figure FIGREF14 . RNN-based NMT model We first briefly introduce the RNN based Neural Machine Translation (RNN-based NMT) model. The RNN-based NMT with attention mechanism BIBREF0 has achieved remarkable performance on many translation tasks. It consists of encoder and decoder part. We firstly introduce the encoder part. The input word sequence of source language are individually mapped into a INLINEFORM0 -dimensional vector space INLINEFORM1 . Then a bi-directional RNN BIBREF15 with GRU BIBREF16 or LSTM BIBREF17 cell converts these vectors into a sequences of hidden states INLINEFORM2 . For the decoder part, another RNN is used to generate target sequence INLINEFORM0 . The attention mechanism BIBREF0 , BIBREF18 is employed to allow the decoder to refer back to the hidden state sequence and focus on a particular segment. The INLINEFORM1 -th hidden state INLINEFORM2 of decoder part is calculated as: DISPLAYFORM0 Here g INLINEFORM0 is a linear combination of attended context vector c INLINEFORM1 and INLINEFORM2 is the word embedding of (i-1)-th target word: DISPLAYFORM0 The attended context vector c INLINEFORM0 is computed as a weighted sum of the hidden states of the encoder: DISPLAYFORM0 The probability distribution vector of the next word INLINEFORM0 is generated according to the following: DISPLAYFORM0 We take this model as the basic RNN-based NMT model in the following experiments. Transformer-NMT Recently, the Transformer model BIBREF4 has made remarkable progress in machine translation. This model contains a multi-head self-attention encoder and a multi-head self-attention decoder. As proposed by BIBREF4 , an attention function maps a query and a set of key-value pairs to an output, where the queries INLINEFORM0 , keys INLINEFORM1 , and values INLINEFORM2 are all vectors. The input consists of queries and keys of dimension INLINEFORM3 , and values of dimension INLINEFORM4 . 
The attention function is given by: DISPLAYFORM0 Multi-head attention mechanism projects queries, keys and values to INLINEFORM0 different representation subspaces and calculates corresponding attention. The attention function outputs are concatenated and projected again before giving the final output. Multi-head attention allows the model to attend to multiple features at different positions. The encoder is composed of a stack of INLINEFORM0 identical layers. Each layer has two sub-layers: multi-head self-attention mechanism and position-wise fully connected feed-forward network. Similarly, the decoder is also composed of a stack of INLINEFORM1 identical layers. In addition to the two sub-layers in each encoder layer, the decoder contains a third sub-layer which performs multi-head attention over the output of the encoder stack (see more details in BIBREF4 ). Experiments Our experiments revolve around the following questions: Q1: As we consider three factors for clause alignment, do all these factors help? How does our method compare with previous methods? Q2: How does the NMT and SMT models perform on this new dataset we build? Clause Alignment Results (Q1) In order to evaluate our clause alignment algorithm, we manually aligned bilingual clauses from 37 bilingual ancient-modern Chinese articles, and finally got 4K aligned bilingual clauses as the Test set and 2K clauses as the Dev set. Metrics. We used F1-score and precision score as the evaluation metrics. Suppose that we get INLINEFORM0 bilingual clause pairs after running the algorithm on the Test set, and there are INLINEFORM1 bilingual clause pairs of these INLINEFORM2 pairs are in the ground truth of the Test set, the precision score is defined as INLINEFORM3 (the algorithm gives INLINEFORM4 outputs, INLINEFORM5 of which are correct). And suppose that the ground truth of the Test set contains INLINEFORM6 bilingual clause pairs, the recall score is INLINEFORM7 (there are INLINEFORM8 ground truth samples, INLINEFORM9 of which are output by the algorithm), then the F1-score is INLINEFORM10 . Baselines. Since the related work BIBREF10 , BIBREF11 can be seen as the ablation cases of our method (only statistical score INLINEFORM0 with dynamic programming), we compared the full proposed method with its variants on the Test set for ablation study. In addition, we also compared our method with the longest common subsequence (LCS) based approach proposed by BIBREF12 . To the best of our knowledge, BIBREF12 is the latest related work which are designed for Ancient-Modern Chinese alignment. Hyper-parameters. For the proposed method, we estimated INLINEFORM0 and INLINEFORM1 on all aligned paragraphs. The probability Pr INLINEFORM2 n-m INLINEFORM3 of each alignment mode n-m was estimated on the Dev set. For the hyper-parameters INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , the grid search was applied to tune them on the Dev set. In order to show the effect of hyper-parameters INLINEFORM7 , INLINEFORM8 , and INLINEFORM9 , we reported the results of various hyper-parameters on the Dev set in Table TABREF26 . Based on the results of grid search on the Dev set, we set INLINEFORM10 , INLINEFORM11 , and INLINEFORM12 in the following experiment. The Jieba Chinese text segmentation is employed for modern Chinese word segmentation. Results. The results on the Test set are shown in Table TABREF28 , the abbreviation w/o means removing a particular part from the setting. 
From the results, we can see that the lexical matching score is the most important among these three factors, and the statistical information score is more important than the edit distance score. Moreover, the dictionary term in the lexical matching score significantly improves the performance. From these results, we obtain the best setting, which involves all three factors, and we used this setting for dataset creation. Furthermore, the proposed method performs much better than LCS BIBREF12 . Translation Results (Q2) In this experiment, we analyzed and compared the performance of the SMT and various NMT models on our built dataset. To verify the effectiveness of our data augmentation method, we trained the NMT and SMT models on both the unaugmented dataset (including 0.46M training pairs) and the augmented dataset, and tested all the models on the same augmented Test set. The models to be tested and their configurations are as follows: SMT. The state-of-the-art Moses toolkit BIBREF19 was used to train the SMT model. We used KenLM BIBREF20 to train a 5-gram language model, and the GIZA++ toolkit to align the data. RNN-based NMT. The basic RNN-based NMT model is based on BIBREF0 , which is introduced above. Both the encoder and decoder used 2-layer RNNs with 1024 LSTM cells, and the encoder is a bi-directional RNN. The batch size, the threshold of element-wise gradient clipping and the initial learning rate of the Adam optimizer BIBREF21 were set to 128, 5.0 and 0.001, respectively. When training the model on the augmented dataset, we used a 4-layer RNN. Several techniques were investigated to train the model, including layer-normalization BIBREF22 , RNN-dropout BIBREF23 , and learning rate decay BIBREF1 . The hyper-parameters were chosen empirically and adjusted on the Dev set. Furthermore, we tested the basic NMT model with several techniques, such as target language reversal BIBREF24 (reversing the order of the words in all target sentences, but not source sentences), residual connections BIBREF25 and pre-trained word2vec BIBREF26 . For word embedding pre-training, we collected an external ancient corpus which contains INLINEFORM0 134M tokens. Transformer-NMT. We also trained the Transformer model BIBREF4 , a strong NMT baseline, on both the augmented and unaugmented parallel corpora. The training configuration of the Transformer model is shown in Table TABREF32 . The hyper-parameters were set based on the settings in the paper BIBREF4 and the sizes of our training sets. For the evaluation, we used as the metric the average of 1 to 4 gram BLEUs multiplied by a brevity penalty BIBREF27 , as computed by multi-bleu.perl in Moses. The results are reported in Table TABREF34 . For RNN-based NMT, we can see that target language reversal, residual connections, and word2vec can further improve the performance of the basic RNN-based NMT model. However, we find that the word2vec and reversal tricks bring no obvious improvement when the RNN-based NMT and Transformer models are trained on the augmented parallel corpus. SMT performs better than the NMT models when they are trained on the unaugmented dataset. Nevertheless, when trained on the augmented dataset, both the RNN-based NMT model and the Transformer-based NMT model outperform the SMT model. In addition, as with other translation tasks BIBREF4 , the Transformer also performs better than RNN-based NMT. Because the Test set contains both augmented and unaugmented data, it is not surprising that the RNN-based NMT model and the Transformer-based NMT model trained on unaugmented data would perform poorly. 
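To make the evaluation metric above concrete, here is a minimal sketch of one possible reading of “the average of 1 to 4 gram BLEUs multiplied by a brevity penalty”: corpus-level modified n-gram precisions are averaged arithmetically and scaled by the brevity penalty. This is an illustration only — standard BLEU (and multi-bleu.perl) combines the precisions with a geometric mean, and the authors' exact computation may differ; the toy tokenized sentences are made up.

```python
from collections import Counter
import math

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_score(hypotheses, references, max_n=4):
    """Arithmetic mean of corpus-level modified n-gram precisions times the
    brevity penalty. `hypotheses` and `references` are lists of token lists."""
    matched = [0] * max_n
    total = [0] * max_n
    hyp_len, ref_len = 0, 0
    for hyp, ref in zip(hypotheses, references):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            hyp_ngrams = ngram_counts(hyp, n)
            ref_ngrams = ngram_counts(ref, n)
            # clipped (modified) n-gram matches
            matched[n - 1] += sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
            total[n - 1] += max(len(hyp) - n + 1, 0)
    precisions = [m / t if t > 0 else 0.0 for m, t in zip(matched, total)]
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return bp * sum(precisions) / max_n

# toy usage with made-up tokenized sentences
hyps = [["the", "king", "returned", "to", "the", "capital"]]
refs = [["the", "king", "went", "back", "to", "the", "capital"]]
print(round(corpus_score(hyps, refs), 4))
```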
In order to further verify the effect of data augmentation, we report the test results of the models on only the unaugmented test data (including 48K test pairs) in Table TABREF35 . From the results, it can be seen that data augmentation still improves the models. Analysis The generated samples of the various models are shown in Figure FIGREF36 . Besides BLEU scores, we analyze these examples from a human perspective and draw some conclusions. At the same time, we design different metrics and evaluate them on the whole Test set to support our conclusions as follows: On the one hand, we further compare the translation results from a human perspective. We find that although the original meaning can be basically translated by SMT, its translations are less fluent than those of the two NMT models (RNN-based NMT and Transformer). For example, the translations of SMT usually lack auxiliary words, conjunctions and function words, which is not consistent with human translation habits. To further confirm this conclusion, we measured the average length of the translation results of the three models (RNN-based NMT: 17.12, SMT: 15.50, Transformer: 16.78, Reference: 16.47). We can see that the average length of the SMT outputs is the shortest, and the length gap between the SMT outputs and the references is the largest. Meanwhile, the average length of the sentences translated by the Transformer is closest to the average length of the references. These results indirectly support our point of view and show that the NMT models perform better than SMT in this task. On the other hand, there still exist some problems to be solved. We observe that translating proper nouns and personal pronouns (such as names, place names and ancient-specific appellations) is very difficult for all of these models. For instance, the ancient Chinese appellation `Zhen' should be translated into `Wo' in modern Chinese. Unfortunately, we calculated the translation accuracy of some special words (such as `Zhen', `Chen' and `Gua'), and found that this accuracy is very low (the accuracy of translating `Zhen' is: RNN-based NMT: 0.14, SMT: 0.16, Transformer: 0.05). We will focus on this issue in the future. Conclusion and Future Work We propose an effective ancient-modern Chinese clause alignment method which achieves a 94.2 F1-score on the Test set. Based on it, we build a large-scale parallel corpus which contains INLINEFORM0 1.24M bilingual sentence pairs. To the best of our knowledge, this is the first large high-quality ancient-modern Chinese dataset. In addition, we test the performance of the SMT and various NMT models on our built dataset and provide a strong NMT baseline for this task which achieves a 27.16 BLEU score (4-gram). We further analyze the performance of the SMT and various NMT models and summarize some specific problems that machine translation models will encounter when translating ancient Chinese. For future work, we will first continue to expand the dataset using the proposed method. Secondly, we will focus on solving the problem of proper noun translation and improve the translation system according to the features of ancient Chinese translation. Finally, we plan to introduce some techniques of statistical translation into neural machine translation to improve the performance. This work is supported by National Natural Science Fund for Distinguished Young Scholar (Grant No. 61625204) and partially supported by the State Key Program of National Science Foundation of China (Grant Nos. 61836006 and 61432014). 
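To make the clause-alignment procedure of Passage 1 concrete, here is a minimal sketch that scores candidate clause groups with a length-based Gaussian term, an edit-distance term, and a placeholder lexical term, and then runs the dynamic program over a subset of the alignment modes. The Gaussian parameters, interpolation weights, mode priors, lexical score, and the omission of the 0-1/1-0 modes are all simplifying assumptions rather than the authors' implementation.

```python
import math
from difflib import SequenceMatcher  # cheap stand-in for the edit-distance term

# --- assumed constants (placeholders, not values from the paper) ---
MU, SIGMA = 1.5, 0.6                    # Gaussian parameters for len(modern)/len(ancient)
W_STAT, W_ED, W_LEX = 0.3, 0.3, 0.4     # interpolation weights
MODE_PRIOR = {(1, 1): 0.80, (1, 2): 0.08, (2, 1): 0.08, (2, 2): 0.04}  # Pr(n-m)

def stat_score(src, tgt):
    """Gaussian density of the target/source length ratio."""
    r = len(tgt) / max(len(src), 1)
    return math.exp(-0.5 * ((r - MU) / SIGMA) ** 2) / (SIGMA * math.sqrt(2 * math.pi))

def edit_score(src, tgt):
    """Similarity in [0, 1]; SequenceMatcher is used as a proxy for normalized edit distance."""
    return SequenceMatcher(None, src, tgt).ratio()

def lex_score(src, tgt):
    """Placeholder for the dictionary-based lexical matching score: character overlap."""
    return len(set(src) & set(tgt)) / max(len(set(src) | set(tgt)), 1)

def group_score(src_clauses, tgt_clauses, mode):
    src, tgt = "".join(src_clauses), "".join(tgt_clauses)
    combined = W_STAT * stat_score(src, tgt) + W_ED * edit_score(src, tgt) + W_LEX * lex_score(src, tgt)
    return combined * MODE_PRIOR[mode]

def align(ancient, modern):
    """Dynamic program over clause lists; returns the best score and alignment steps."""
    n, m = len(ancient), len(modern)
    NEG = float("-inf")
    best = [[NEG] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if best[i][j] == NEG:
                continue
            for di, dj in MODE_PRIOR:
                ni, nj = i + di, j + dj
                if ni > n or nj > m:
                    continue
                s = best[i][j] + group_score(ancient[i:ni], modern[j:nj], (di, dj))
                if s > best[ni][nj]:
                    best[ni][nj], back[ni][nj] = s, (i, j)
    # recover the alignment path from (n, m) back to (0, 0)
    path, cell = [], (n, m)
    while back[cell[0]][cell[1]] is not None:
        prev = back[cell[0]][cell[1]]
        path.append((prev, cell))
        cell = prev
    return best[n][m], list(reversed(path))

ancient = ["學而時習之", "不亦說乎"]
modern = ["學了又按時去溫習", "不也是很愉快嗎"]
print(align(ancient, modern))
```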
Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Where does the ancient Chinese dataset come from? Only give me the answer and do not output any other words.
Answer:
[ "ancient Chinese history records in several dynasties (about 1000BC-200BC) and articles written by celebrities of that era", "Ancient Chinese history records in several dynasties and articles written by celebrities during 1000BC-200BC collected from the internet " ]
276
4,980
single_doc_qa
87d40b46e5d746f1a59fb1b9b3597422
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction There exists a class of language construction known as pun in natural language texts and utterances, where a certain word or other lexical items are used to exploit two or more separate meanings. It has been shown that understanding of puns is an important research question with various real-world applications, such as human-computer interaction BIBREF0 , BIBREF1 and machine translation BIBREF2 . Recently, many researchers show their interests in studying puns, like detecting pun sentences BIBREF3 , locating puns in the text BIBREF4 , interpreting pun sentences BIBREF5 and generating sentences containing puns BIBREF6 , BIBREF7 , BIBREF8 . A pun is a wordplay in which a certain word suggests two or more meanings by exploiting polysemy, homonymy, or phonological similarity to another sign, for an intended humorous or rhetorical effect. Puns can be generally categorized into two groups, namely heterographic puns (where the pun and its latent target are phonologically similar) and homographic puns (where the two meanings of the pun reflect its two distinct senses) BIBREF9 . Consider the following two examples: The first punning joke exploits the sound similarity between the word “propane" and the latent target “profane", which can be categorized into the group of heterographic puns. Another categorization of English puns is homographic pun, exemplified by the second instance leveraging distinct senses of the word “gut". Pun detection is the task of detecting whether there is a pun residing in the given text. The goal of pun location is to find the exact word appearing in the text that implies more than one meanings. Most previous work addresses such two tasks separately and develop separate systems BIBREF10 , BIBREF5 . Typically, a system for pun detection is built to make a binary prediction on whether a sentence contains a pun or not, where all instances (with or without puns) are taken into account during training. For the task of pun location, a separate system is used to make a single prediction as to which word in the given sentence in the text that trigger more than one semantic interpretations of the text, where the training data involves only sentences that contain a pun. Therefore, if one is interested in solving both problems at the same time, a pipeline approach that performs pun detection followed by pun location can be used. Compared to the pipeline methods, joint learning has been shown effective BIBREF11 , BIBREF12 since it is able to reduce error propagation and allows information exchange between tasks which is potentially beneficial to all the tasks. In this work, we demonstrate that the detection and location of puns can be jointly addressed by a single model. The pun detection and location tasks can be combined as a sequence labeling problem, which allows us to jointly detect and locate a pun in a sentence by assigning each word a tag. Since each context contains a maximum of one pun BIBREF9 , we design a novel tagging scheme to capture this structural constraint. Statistics on the corpora also show that a pun tends to appear in the second half of a context. To capture such a structural property, we also incorporate word position knowledge into our structured prediction model. 
Experiments on the benchmark datasets show that detection and location tasks can reinforce each other, leading to new state-of-the-art performance on these two tasks. To the best of our knowledge, this is the first work that performs joint detection and location of English puns by using a sequence labeling approach. Problem Definition We first design a simple tagging scheme consisting of two tags { INLINEFORM0 }: INLINEFORM0 tag means the current word is not a pun. INLINEFORM0 tag means the current word is a pun. If the tag sequence of a sentence contains a INLINEFORM0 tag, then the text contains a pun and the word corresponding to INLINEFORM1 is the pun. The contexts have the characteristic that each context contains a maximum of one pun BIBREF9 . In other words, there exists only one pun if the given sentence is detected as the one containing a pun. Otherwise, there is no pun residing in the text. To capture this interesting property, we propose a new tagging scheme consisting of three tags, namely { INLINEFORM0 }. INLINEFORM0 tag indicates that the current word appears before the pun in the given context. INLINEFORM0 tag highlights the current word is a pun. INLINEFORM0 tag indicates that the current word appears after the pun. We empirically show that the INLINEFORM0 scheme can guarantee the context property that there exists a maximum of one pun residing in the text. Given a context from the training set, we will be able to generate its corresponding gold tag sequence using a deterministic procedure. Under the two schemes, if a sentence does not contain any puns, all words will be tagged with INLINEFORM0 or INLINEFORM1 , respectively. Exemplified by the second sentence “Some diets cause a gut reaction," the pun is given as “gut." Thus, under the INLINEFORM2 scheme, it should be tagged with INLINEFORM3 , while the words before it are assigned with the tag INLINEFORM4 and words after it are with INLINEFORM5 , as illustrated in Figure FIGREF8 . Likewise, the INLINEFORM6 scheme tags the word “gut" with INLINEFORM7 , while other words are tagged with INLINEFORM8 . Therefore, we can combine the pun detection and location tasks into one problem which can be solved by the sequence labeling approach. Model Neural models have shown their effectiveness on sequence labeling tasks BIBREF13 , BIBREF14 , BIBREF15 . In this work, we adopt the bidirectional Long Short Term Memory (BiLSTM) BIBREF16 networks on top of the Conditional Random Fields BIBREF17 (CRF) architecture to make labeling decisions, which is one of the classical models for sequence labeling. Our model architecture is illustrated in Figure FIGREF8 with a running example. Given a context/sentence INLINEFORM0 where INLINEFORM1 is the length of the context, we generate the corresponding tag sequence INLINEFORM2 based on our designed tagging schemes and the original annotations for pun detection and location provided by the corpora. Our model is then trained on pairs of INLINEFORM3 . Input. The contexts in the pun corpus hold the property that each pun contains exactly one content word, which can be either a noun, a verb, an adjective, or an adverb. To capture this characteristic, we consider lexical features at the character level. Similar to the work of BIBREF15 , the character embeddings are trained by the character-level LSTM networks on the unannotated input sequences. 
Nonlinear transformations are then applied to the character embeddings by highway networks BIBREF18 , which map the character-level features into different semantic spaces. We also observe that a pun tends to appear at the end of a sentence. Specifically, based on the statistics, we found that sentences with a pun that locate at the second half of the text account for around 88% and 92% in homographic and heterographic datasets, respectively. We thus introduce a binary feature that indicates if a word is located at the first or the second half of an input sentence to capture such positional information. A binary indicator can be mapped to a vector representation using a randomly initialized embedding table BIBREF19 , BIBREF20 . In this work, we directly adopt the value of the binary indicator as part of the input. The concatenation of the transformed character embeddings, the pre-trained word embeddings BIBREF21 , and the position indicators are taken as input of our model. Tagging. The input is then fed into a BiLSTM network, which will be able to capture contextual information. For a training instance INLINEFORM0 , we suppose the output by the word-level BiLSTM is INLINEFORM1 . The CRF layer is adopted to capture label dependencies and make final tagging decisions at each position, which has been included in many state-of-the-art sequence labeling models BIBREF14 , BIBREF15 . The conditional probability is defined as: where INLINEFORM0 is a set of all possible label sequences consisting of tags from INLINEFORM1 (or INLINEFORM2 ), INLINEFORM3 and INLINEFORM4 are weight and bias parameters corresponding to the label pair INLINEFORM5 . During training, we minimize the negative log-likelihood summed over all training instances: where INLINEFORM0 refers to the INLINEFORM1 -th instance in the training set. During testing, we aim to find the optimal label sequence for a new input INLINEFORM2 : This search process can be done efficiently using the Viterbi algorithm. Datasets and Settings We evaluate our model on two benchmark datasets BIBREF9 . The homographic dataset contains 2,250 contexts, 1,607 of which contain a pun. The heterographic dataset consists of 1,780 contexts with 1,271 containing a pun. We notice there is no standard splitting information provided for both datasets. Thus we apply 10-fold cross validation. To make direct comparisons with prior studies, following BIBREF4 , we accumulated the predictions for all ten folds and calculate the scores in the end. For each fold, we randomly select 10% of the instances from the training set for development. Word embeddings are initialized with the 100-dimensional Glove BIBREF21 . The dimension of character embeddings is 30 and they are randomly initialized, which can be fine tuned during training. The pre-trained word embeddings are not updated during training. The dimensions of hidden vectors for both char-level and word-level LSTM units are set to 300. We adopt stochastic gradient descent (SGD) BIBREF26 with a learning rate of 0.015. For the pun detection task, if the predicted tag sequence contains at least one INLINEFORM0 tag, we regard the output (i.e., the prediction of our pun detection model) for this task as true, otherwise false. For the pun location task, a predicted pun is regarded as correct if and only if it is labeled as the gold pun in the dataset. As to pun location, to make fair comparisons with prior studies, we only consider the instances that are labeled as the ones containing a pun. 
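Before turning to the results, here is a minimal sketch (not the authors' code) of the Viterbi decoding step mentioned in the Model description above: given per-position tag scores from the BiLSTM and the CRF transition scores, it recovers the highest-scoring tag sequence. The score matrices in the usage example are random stand-ins.

```python
import numpy as np

def viterbi(emissions, transitions):
    """emissions: (seq_len, num_tags) scores from the BiLSTM;
    transitions: (num_tags, num_tags) scores for moving from tag i to tag j.
    Returns the best-scoring tag sequence as a list of tag indices."""
    seq_len, num_tags = emissions.shape
    score = emissions[0].copy()                      # best score ending in each tag at step 0
    backptr = np.zeros((seq_len, num_tags), dtype=int)
    for t in range(1, seq_len):
        # candidate[i, j] = score of ending at tag i at step t-1, then moving to tag j
        candidate = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = candidate.argmax(axis=0)
        score = candidate.max(axis=0)
    # backtrack from the best final tag
    best_last = int(score.argmax())
    path = [best_last]
    for t in range(seq_len - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return list(reversed(path))

# toy usage: 4 words, 3 tags (e.g. before=0, pun=1, after=2)
rng = np.random.default_rng(0)
em = rng.normal(size=(4, 3))
tr = rng.normal(size=(3, 3))
print(viterbi(em, tr))
```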
We report precision, recall and INLINEFORM1 score in Table TABREF11 . A list of prior works that did not employ joint learning is also shown in the first block of Table TABREF11 . Results We also implemented a baseline model based on conditional random fields (CRF), where features like POS tags produced by the Stanford POS tagger BIBREF27 , n-grams, label transitions, word suffixes and relative position to the end of the text are considered. We can see that our model with the INLINEFORM0 tagging scheme yields new state-of-the-art INLINEFORM1 scores on pun detection and competitive results on pun location, compared to the baselines in the first block, which do not adopt joint learning. For location on heterographic puns, our model's performance is slightly lower than the system of BIBREF25 , which is a rule-based locator. Compared to CRF, we can see that our model, either with the INLINEFORM2 or the INLINEFORM3 scheme, yields significantly higher recall on both detection and location tasks, while the precisions are relatively close. This demonstrates the effectiveness of BiLSTM, which learns the contextual features of the given texts – such information appears to be helpful in recalling more puns. Compared to the INLINEFORM0 scheme, the INLINEFORM1 tagging scheme is able to yield better performance on these two tasks. After studying the outputs from these two approaches, we found that one leading source of error for the INLINEFORM2 approach is that more than one word in a single instance may be assigned the INLINEFORM3 tag. However, according to the description of puns in BIBREF9 , each context contains a maximum of one pun. Thus, such a useful structural constraint is not well captured by the simple approach based on the INLINEFORM4 tagging scheme. On the other hand, by applying the INLINEFORM5 tagging scheme, such a constraint is properly captured in the model. As a result, the results for such an approach are significantly better than those for the approach based on the INLINEFORM6 tagging scheme, as we can observe from the table. Under the same experimental setup, we also attempted to exclude word position features. Results are given by INLINEFORM7 - INLINEFORM8 . As expected, the performance of pun location drops, since such position features are able to capture the interesting property that a pun tends to appear in the second half of a sentence. While such knowledge is helpful for the location task, interestingly, a model without position knowledge yields improved performance on the pun detection task. One possible reason is that detecting whether a sentence contains a pun is not concerned with such word position information. Additionally, we conduct experiments over sentences containing a pun only, namely 1,607 and 1,271 instances from the homographic and heterographic pun corpora, respectively. This can be regarded as a “pipeline” method where the classifier for pun detection is regarded as perfect. Following the prior work of BIBREF4 , we apply 10-fold cross validation. Since we are given that all input sentences contain a pun, we only report accumulated results on pun location, denoted as Pipeline in Table TABREF11 . Compared with our approaches, the performance of such an approach drops significantly. On the other hand, this demonstrates that the two tasks, detection and location of puns, can reinforce each other. These figures demonstrate the effectiveness of our sequence labeling method for detecting and locating English puns in a joint manner. 
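To make the two tagging schemes compared above concrete, the following sketch (an illustration only; the tag names stand in for the INLINEFORM symbols) encodes a tokenized sentence and its pun position under a two-tag {O, P} scheme and a three-tag {B, P, A} scheme (words before / the pun / words after), and also produces the binary second-half position indicator used as an input feature.

```python
def encode_tags(tokens, pun_index=None):
    """Return (two_tag, three_tag) sequences for a sentence.
    pun_index is None when the sentence contains no pun."""
    two_tag, three_tag = [], []
    for i, _ in enumerate(tokens):
        if pun_index is None:
            two_tag.append("O")
            three_tag.append("B")   # assumed convention for pun-free sentences
        elif i < pun_index:
            two_tag.append("O")
            three_tag.append("B")   # before the pun
        elif i == pun_index:
            two_tag.append("P")
            three_tag.append("P")   # the pun itself
        else:
            two_tag.append("O")
            three_tag.append("A")   # after the pun
    return two_tag, three_tag

def position_feature(tokens):
    """1 if the word sits in the second half of the sentence, else 0."""
    half = len(tokens) / 2.0
    return [1 if i >= half else 0 for i in range(len(tokens))]

sent = "Some diets cause a gut reaction".split()
two_tag, three_tag = encode_tags(sent, pun_index=sent.index("gut"))
print(two_tag)                 # ['O', 'O', 'O', 'O', 'P', 'O']
print(three_tag)               # ['B', 'B', 'B', 'B', 'P', 'A']
print(position_feature(sent))  # [0, 0, 0, 1, 1, 1]
```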
Error Analysis We studied the outputs from our system and make some error analysis. We found the errors can be broadly categorized into several types, and we elaborate them here. 1) Low word coverage: since the corpora are relatively small, there exist many unseen words in the test set. Learning the representations of such unseen words is challenging, which affects the model's performance. Such errors contribute around 40% of the total errors made by our system. 2) Detection errors: we found many errors are due to the model's inability to make correct pun detection. Such inability harms both pun detection and pun location. Although our approach based on the INLINEFORM0 tagging scheme yields relatively higher scores on the detection task, we still found that 40% of the incorrectly predicted instances fall into this group. 3) Short sentences: we found it was challenging for our model to make correct predictions when the given text is short. Consider the example “Superglue! Tom rejoined," here the word rejoined is the corresponding pun. However, it would be challenging to figure out the pun with such limited contextual information. Related Work Most existing systems address pun detection and location separately. BIBREF22 applied word sense knowledge to conduct pun detection. BIBREF24 trained a bidirectional RNN classifier for detecting homographic puns. Next, a knowledge-based approach is adopted to find the exact pun. Such a system is not applicable to heterographic puns. BIBREF28 applied Google n-gram and word2vec to make decisions. The phonetic distance via the CMU Pronouncing Dictionary is computed to detect heterographic puns. BIBREF10 used the hidden Markov model and a cyclic dependency network with rich features to detect and locate puns. BIBREF23 used a supervised approach to pun detection and a weakly supervised approach to pun location based on the position within the context and part of speech features. BIBREF25 proposed a rule-based system for pun location that scores candidate words according to eleven simple heuristics. Two systems are developed to conduct detection and location separately in the system known as UWAV BIBREF3 . The pun detector combines predictions from three classifiers. The pun locator considers word2vec similarity between every pair of words in the context and position to pinpoint the pun. The state-of-the-art system for homographic pun location is a neural method BIBREF4 , where the word senses are incorporated into a bidirectional LSTM model. This method only supports the pun location task on homographic puns. Another line of research efforts related to this work is sequence labeling, such as POS tagging, chunking, word segmentation and NER. The neural methods have shown their effectiveness in this task, such as BiLSTM-CNN BIBREF13 , GRNN BIBREF29 , LSTM-CRF BIBREF30 , LSTM-CNN-CRF BIBREF14 , LM-LSTM-CRF BIBREF15 . In this work, we combine pun detection and location tasks as a single sequence labeling problem. Inspired by the work of BIBREF15 , we also adopt a LSTM-CRF with character embeddings to make labeling decisions. Conclusion In this paper, we propose to perform pun detection and location tasks in a joint manner from a sequence labeling perspective. We observe that each text in our corpora contains a maximum of one pun. Hence, we design a novel tagging scheme to incorporate such a constraint. Such a scheme guarantees that there is a maximum of one word that will be tagged as a pun during the testing phase. 
We also found the interesting structural property such as the fact that most puns tend to appear at the second half of the sentences can be helpful for such a task, but was not explored in previous works. Furthermore, unlike many previous approaches, our approach, though simple, is generally applicable to both heterographic and homographic puns. Empirical results on the benchmark datasets prove the effectiveness of the proposed approach that the two tasks of pun detection and location can be addressed by a single model from a sequence labeling perspective. Future research includes the investigations on how to make use of richer semantic and linguistic information for detection and location of puns. Research on puns for other languages such as Chinese is still under-explored, which could also be an interesting direction for our future studies. Acknowledgments We would like to thank the three anonymous reviewers for their thoughtful and constructive comments. This work is supported by Singapore Ministry of Education Academic Research Fund (AcRF) Tier 2 Project MOE2017-T2-1-156, and is partially supported by SUTD project PIE-SGP-AI-2018-01. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What baselines do they compare with? Only give me the answer and do not output any other words.
Answer:
[ "They compare with the following models: by Pedersen (2017), by Pramanick and Das (2017), by Mikhalkova and Karyakin (2017), by Vadehra (2017), Indurthi and Oota (2017), by Vechtomova (2017), by (Cai et al., 2018), and CRF." ]
276
3,748
single_doc_qa
0713556ef7c34df2b41738622dde59f5
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: \section*{Dynamical Behaviour of $O$ in Lattice Gases} The dynamical behaviour of the anisotropic order parameter $m$ [see Eq.~\eqref{eq:def-m} in the Letter] following a quench to the critical point is well described by the Gaussian theory for all the three lattice gas models studied, $i.e.,$ driven lattice gas with either constant (IDLG) or random (RDLG) infinite drive and equilibrium lattice gas (LG). In other words, in the short-time regime, $m \sim t^{1/2}$ [see Eq. \eqref{eq:mt}] and the Binder cumulant $g$ of the lowest transverse mode [defined in Eq. \eqref{eq:binder}] is zero in this regime. The alternative order parameter $O,$ however, distinguishes between the driven (IDLG, RDLG) and the equilibrium (LG) lattice gases. In order to understand this, we first write the phenomenological scaling form for $O$, analogous to Eq. \eqref{eq:scalingass} in the Letter, \begin{eqnarray} O (t, L_{\parallel} ; S_\Delta) = L_{\parallel}^{-\beta/[\nu(1+\Delta)]} \tilde f_O (t/L_{\parallel}^{z/(1+\Delta)} ; S_\Delta).\quad \label{eq:Oscalingass} \end{eqnarray} We already remarked that, in the LG, this scaling form is not compatible with the prediction $O \sim t^{1/8} L_{\parallel}^{-1/2}$ of the Gaussian theory. However, following Ref. \cite{AS2002}, it can be argued that, at short times, the only dependence of $O$ on the system size $L_{\parallel}$ is of the form $O \sim L_\parallel^{-1/2}$ which is very well confirmed by numerical simulations. Accordingly, the generic behaviour of $O$ can be assumed to be \begin{eqnarray} O \sim t^{\alpha} L_\parallel^{-1/2}, \label{eq:O} \end{eqnarray} where $\alpha$ is a phenomenological exponent to be determined. This, along with Eq. \eqref{eq:Oscalingass}, implies $\tilde f_O(x) \sim x^{\alpha}.$ Comparing the finite-size behaviour in Eq.~\eqref{eq:O} with Eq.~\eqref{eq:Oscalingass} one actually infers, \begin{eqnarray} \alpha &=& \frac{1+ \Delta -2 \beta/\nu}{2 \, (4- \eta)}. \label{eq:alpha} \end{eqnarray} This equation, together with the hyperscaling relation $\Delta - 2 \beta/\nu= - \eta$ in two spatial dimensions, shows that the prediction $\alpha = 1/8$ of the Gaussian theory [see Eq. \eqref{eq:Ot}] can be obtained only when $\eta=0,$ which is the case for the IDLG (exactly) and the RDLG (approximately) but not for the LG. On the other hand, Eq.~\eqref{eq:alpha} predicts $\alpha = 1/10$ upon substituting the values of the critical exponents corresponding to the Ising universality class (LG). This is consistent with the numerical simulation results presented in the main text, see Fig. \ref{fig:ising}(b) therein. \begin{figure}[th] \vspace*{0.2 cm} \centering \includegraphics[width=10 cm]{./compare_binder.pdf} \caption{Comparison between the temporal evolution of the Binder cumulants $g$ corresponding to the $12^{th}$ transverse mode, $i.e.,$ with $n_\perp =12,$ in the LG (lowest curve), IDLG and RDLG (two upper curves) on a $32 \times 32$ lattice. \label{fig:b}} \label{fig:binder} \end{figure} The emergence of this new value $1/10$ of the exponent $\alpha$ must be traced back to the non-Gaussian nature of higher fluctuating modes in the LG. In fact, even though the lowest mode behaves identically in all the three models we considered, characterized by the same behaviour of $m$, higher modes show a significant difference in the non-driven case. 
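As a quick consistency check of Eq.~\eqref{eq:alpha} (added here for illustration; it simply restates the exponent values quoted in the text), the hyperscaling relation $\Delta - 2\beta/\nu = -\eta$ gives
\begin{eqnarray}
\alpha = \frac{1+\Delta-2\beta/\nu}{2\,(4-\eta)} = \frac{1-\eta}{2\,(4-\eta)} =
\begin{cases}
1/8, & \eta = 0 \quad \text{(IDLG, and approximately the RDLG)},\\[4pt]
\frac{1-1/4}{2\,(4-1/4)} = \frac{3/4}{15/2} = 1/10, & \eta = 1/4 \quad \text{(2D Ising class, i.e., the LG)}.
\end{cases}
\end{eqnarray}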
To illustrate this, we measured the Binder cumulants of higher modes which is defined analogously to Eq.~(11), using transverse modes other than the first, i.e., with $\mu=\tilde \sigma(0,2 \pi n_\bot/L_\bot)$ and $n_\bot>1.$ Figure \ref{fig:b} compares the same for all the three lattice gases for the mode with $n_\perp =12$ on a $32 \times 32$ lattice. Clearly, the curve corresponding to the LG (lowest, blue) departs from Gaussian behaviour $g=0$ (in practice, $e.g.,$ $|g| \lesssim 0.005,$ corresponding to the shaded gray area) much earlier than it does for the IDLG or RDLG (two upper curves, red and green respectively). Accordingly, the different dynamical behaviour of $O$, which involves a sum over all modes, can be attributed to the non-Gaussian nature of the higher modes in the LG. Such a departure is not entirely surprising. In fact, for higher modes, mesoscopic descriptions such as the ones in Eqs. \eqref{eq:L-DLG} or \eqref{eq:g_evol} are not expected to hold, while the anisotropy at the microscopic level could be the mechanism leading to the Gaussianity of higher modes in the driven models. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What is the dynamical behavior of the anisotropic order parameter following a quench to the critical point? Only give me the answer and do not output any other words.
Answer:
[ "It is well described by the Gaussian theory." ]
276
1,292
single_doc_qa
c483967507564b679d524db79f88b133
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Brooksley Elizabeth Born (born August 27, 1940) is an American attorney and former public official who, from August 26, 1996, to June 1, 1999, was chair of the Commodity Futures Trading Commission (CFTC), the federal agency which oversees the U.S. futures and commodity options markets. During her tenure on the CFTC, Born lobbied Congress and the President to give the CFTC oversight of off-exchange markets for derivatives, in addition to its role with respect to exchange-traded derivatives, but her warnings were ignored or dismissed, and her calls for reform resisted by other regulators.<ref name="nytimes">Goodman, Peter S. The Reckoning - Taking Hard New Look at a Greenspan Legacy, The New York Times, October 9, 2008.</ref> Born resigned as chairperson on June 1, 1999, shortly after Congress passed legislation prohibiting her agency from regulating derivatives. In 2009, Born received the John F. Kennedy Profiles in Courage Award, along with Sheila Bair of the Federal Deposit Insurance Corporation, in recognition of the "political courage she demonstrated in sounding early warnings about conditions that contributed" to the 2007-08 financial crisis. Early life and education Born graduated from Abraham Lincoln High School (San Francisco, California) at the age of 16. She then attended Stanford University, where she majored in English and was graduated with the class of 1961. She initially wanted to become a doctor, but a guidance counsellor at Stanford advised her against medicine, so she majored in English literature instead. She then attended Stanford Law School, one of only seven women in her class. She was the first female student ever to be named president of the Stanford Law Review. She received the "Outstanding Senior" award and graduated as valedictorian of the class of 1964. Legal career Immediately after law school Born was selected as a law clerk to judge Henry Edgerton of the U.S. Court of Appeals for the District of Columbia Circuit. It was during this time that she met her first husband, Jacob C. Landau, who was a journalist covering the Federal courts at the time. Following her clerkship, she became an associate at the Washington, D.C.-based international law firm of Arnold & Porter. Born was attracted to Arnold & Porter because it was one of the few major law firms to have a woman partner at that time, Carolyn Agger, who was the head of the tax practice. Born took a two-year leave of absence from Arnold & Porter to accompany her first husband to Boston, where he had received a fellowship. During that time she worked as a research assistant to law professor Alan Dershowitz. Born's early career at Arnold & Porter focused on international trade law, in which she represented a number of Swiss industries and the government of Switzerland. She developed a practice representing clients in numerous complex litigation and arbitration cases involving financial market transactions. Among her high-profile cases was the matter of the Hunt Brothers attempt to corner the market in silver in the 1970s. She made partner at Arnold & Porter, after moving to a three-day schedule to help raise her second child, and eventually rose to be the head of the firm's derivatives practice. Born was among the first female attorneys to systematically address inequities regarding how the laws treated women. 
Born and another female lawyer, Marna Tucker, taught what is considered to have been the first "Women and the Law" course at Catholic University’s Columbus School of Law. The class exclusively concerned prejudicial treatment of women under the laws of the United States, past and present. Born and Tucker were surprised to discover that there was no textbook on the issue at the time. Born is also one of the co-founders of the National Women's Law Center. Born also helped rewrite the American Bar Association rules to make it possible for more women and minorities to sit on federal bench. During her long legal career, and into her retirement, Born did much pro bono and other types of volunteer work. She was active in the American Bar Association, the largest professional organization of lawyers in the United States. Initially Born was named a member of the governing council of the ABA's Individual Rights Section, eventually becoming chairperson. Born and Tucker founded the ABA Women's Caucus, the first organization of female lawyers in the ABA. She held several other senior positions in the ABA, including being named the first woman member of the ABA's Standing Committee on the Federal Judiciary. As a member of the Judiciary Committee, Born provided testimony and opinion on persons nominated for federal judgeships. In 1980 she was named chair of the committee. As chair of the committee, Born was invited to address the U.S. Congress regarding the nomination of Judge Sandra Day O'Connor to the U.S. Supreme Court. In 1993, Born's name was floated as a possible candidate for Attorney General of the United States, but Janet Reno was nominated. In July 2009, Nancy Pelosi appointed Brooksley Born as a commissioner to the Financial Crisis Inquiry Commission (FCIC). Born and the OTC derivatives market Born was appointed to the CFTC on April 15, 1994, by President Bill Clinton. Due to litigation against Bankers Trust Company by Procter and Gamble and other corporate clients, Born and her team at the CFTC sought comments on the regulation of over-the-counter derivatives, a first step in the process of writing CFTC regulations to supplement the existing regulations of the Federal Reserve System, the Options Clearing Corporation, and the National Association of Insurance Commissioners. Born was particularly concerned about swaps, financial instruments that are traded over the counter between banks, insurance companies or other funds or companies, and thus have no transparency except to the two counterparties and the counterparties' regulators, if any. CFTC regulation was strenuously opposed by Federal Reserve chairman Alan Greenspan, and by Treasury Secretaries Robert Rubin and Lawrence Summers. On May 7, 1998, former SEC Chairman Arthur Levitt joined Rubin and Greenspan in objecting to the issuance of the CFTC's concept release. Their response dismissed Born's analysis and focused on the hypothetical possibility that CFTC regulation of swaps and other OTC derivative instruments could create a "legal uncertainty" regarding such financial instruments, hypothetically reducing the value of the instruments. They argued that the imposition of regulatory costs would "stifle financial innovation" and encourage financial capital to transfer its transactions offshore. 
The disagreement between Born and the Executive Office's top economic policy advisors has been described not only as a classic Washington turf war, but also a war of ideologies, insofar as it is possible to argue that Born's actions were consistent with Keynesian and neoclassical economics while Greenspan, Rubin, Levitt, and Summers consistently espoused neoliberal, and neoconservative policies. In 1998, a trillion-dollar hedge fund called Long Term Capital Management (LTCM) was near collapse. Using mathematical models to calculate debt risk, LTCM used derivatives to leverage $5 billion into more than $1 trillion, doing business with fifteen of Wall Street's largest financial institutions. The derivative transactions were not regulated, nor were investors able to evaluate LTCM's exposures. Born stated, "I thought that LTCM was exactly what I had been worried about". In the last weekend of September 1998, the President's working group was told that the entire American economy hung in the balance. After intervention by the Federal Reserve, the crisis was averted. In congressional hearings into the crisis, Greenspan acknowledged that language had been introduced into an agriculture bill that would prevent CFTC from regulating the derivatives which were at the center of the crisis that threatened the US economy. U.S. Representative Maurice Hinchey (D-NY) asked "How many more failures do you think we'd have to have before some regulation in this area might be appropriate?" In response, Greenspan brushed aside the substance of Born's warnings with the simple assertion that "the degree of supervision of regulation of the over-the-counter derivatives market is quite adequate to maintain a degree of stability in the system". Born's warning was that there wasn't any regulation of them. Born's chief of staff, Michael Greenberger summed up Greenspan's position this way: "Greenspan didn't believe that fraud was something that needed to be enforced, and he assumed she probably did. And of course, she did." Under heavy pressure from the financial lobby, legislation prohibiting regulation of derivatives by Born's agency was passed by the Congress. Born resigned on June 1, 1999. The derivatives market continued to grow yearly throughout both terms of George W. Bush's administration. On September 15, 2008, the bankruptcy of Lehman Brothers forced a broad recognition of a financial crisis in both the US and world capital markets. As Lehman Brothers' failure temporarily reduced financial capital's confidence, a number of newspaper articles and television programs suggested that the failure's possible causes included the conflict between the CFTC and the other regulators.Faiola, Anthony, Nakashima, Ellen and Drew, Jill. The Crash: Risk and Regulation - What Went Wrong, The Washington Post, October 15, 2008. Born declined to publicly comment on the unfolding 2008 crisis until March 2009, when she said: "The market grew so enormously, with so little oversight and regulation, that it made the financial crisis much deeper and more pervasive than it otherwise would have been." She also lamented the influence of Wall Street lobbyists on the process and the refusal of regulators to discuss even modest reforms. An October 2009 Frontline documentary titled "The Warning" described Born's thwarted efforts to regulate and bring transparency to the derivatives market, and the continuing opposition thereto. 
The program concluded with an excerpted interview with Born sounding another warning: "I think we will have continuing danger from these markets and that we will have repeats of the financial crisis -- may differ in details but there will be significant financial downturns and disasters attributed to this regulatory gap, over and over, until we learn from experience." In 2009 Born, along with Sheila Bair of the FDIC, was awarded the John F. Kennedy Profiles in Courage Award in recognition of the "political courage she demonstrated in sounding early warnings about conditions that contributed" to the 2007-08 financial crisis. According to Caroline Kennedy, "Brooksley Born recognized that the financial security of all Americans was being put at risk by the greed, negligence and opposition of powerful and well connected interests.... The catastrophic financial events of recent months have proved them [Born and Sheila Bair] right." One member of the President's working group had a change of heart about Brooksley Born. SEC Chairman Arthur Levitt stated "I've come to know her as one of the most capable, dedicated, intelligent and committed public servants that I have ever come to know", adding that "I could have done much better. I could have made a difference" in response to her warnings. In 2010, a documentary film Inside Job further alleged that derivatives regulation was ineffective from the Clinton administration on. Along with fellow whistleblower, former IMF Chief Economist Raghuram Rajan, who was also scorned by the economic establishment, Brooksley Born was cited as one of the authorities arguing that financial derivatives increase economic risk. Personal life Born is married to Alexander E. Bennett (also retired from Arnold & Porter). She has five adult children - two from a previous marriage to Jacob Landau and three stepchildren. Notably, Born was named a partner at Arnold & Porter while working part-time so she could raise her two young children. When both of her children were school-age, Born returned to practice full-time. References External links Attorney profile at Arnold & Porter Brooksley Born (2009 Winner) of the Profiles in Courage Award, with acceptance speech transcript and NECN video Profile at MarketsWiki Speeches and statements "Testimony Of Brooksley Born Chairperson of the CFTC Concerning The Over-The-Counter Derivatives Market", before the House Committee On Banking And Financial Services, July 24, 1998. "The Lessons of Long Term Capital Management L.P.", Remarks of Brooksley Born, Chairperson of the CFTC, Chicago-Kent-IIT Commodities Law Institute, Chicago, Illinois, October 15, 1998. Interview: Brooksley Born for "PBS Frontline: The Warning", PBS, (streaming VIDEO 1 hour), October 20, 2009. Articles Manuel Roig-Franzia. "Credit Crisis Cassandra:Brooksley Born's Unheeded Warning Is a Rueful Echo 10 Years On", The Washington Post, May 26, 2009 Taibbi, Matt. "The Great American Bubble Machine", Rolling Stone'', July 9–23, 2009 1940 births American women lawyers Arnold & Porter people Clinton administration personnel Columbus School of Law faculty Commodity Futures Trading Commission personnel Heads of United States federal agencies Lawyers from San Francisco Living people Stanford Law School alumni 21st-century American women Stanford University alumni Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: When did Born resign as chairperson of the CFTC? Only give me the answer and do not output any other words.
Answer:
[ "June 1, 1999." ]
276
2,761
single_doc_qa
d1d1414725df401683a2cc209a1dc6d1
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The challenges of imbalanced classification—in which the proportion of elements in each class for a classification task significantly differ—and of the ability to generalise on dissimilar data have remained important problems in Natural Language Processing (NLP) and Machine Learning in general. Popular NLP tasks including sentiment analysis, propaganda detection, and event extraction from social media are all examples of imbalanced classification problems. In each case the number of elements in one of the classes (e.g. negative sentiment, propagandistic content, or specific events discussed on social media, respectively) is significantly lower than the number of elements in the other classes. The recently introduced BERT language model for transfer learning BIBREF0 uses a deep bidirectional transformer architecture to produce pre-trained context-dependent embeddings. It has proven to be powerful in solving many NLP tasks and, as we find, also appears to handle imbalanced classification well, thus removing the need to use standard methods of data augmentation to mitigate this problem (see Section SECREF11 for related work and Section SECREF16 for analysis). BERT is credited with the ability to adapt to many tasks and data with very little training BIBREF0. However, we show that BERT fails to perform well when the training and test data are significantly dissimilar, as is the case with several tasks that deal with social and news data. In these cases, the training data is necessarily a subset of past data, while the model is likely to be used on future data which deals with different topics. This work addresses this problem by incorporating cost-sensitivity (Section SECREF19) into BERT. We test these methods by participating in the Shared Task on Fine-Grained Propaganda Detection for the 2nd Workshop on NLP for Internet Freedom, for which we achieve the second rank on sentence-level classification of propaganda, confirming the importance of cost-sensitivity when the training and test sets are dissimilar. Introduction ::: Detecting Propaganda The term `propaganda' derives from propagare in post-classical Latin, as in “propagation of the faith" BIBREF1, and thus has from the beginning been associated with an intentional and potentially multicast communication; only later did it become a pejorative term. It was pragmatically defined in the World War II era as “the expression of an opinion or an action by individuals or groups deliberately designed to influence the opinions or the actions of other individuals or groups with reference to predetermined ends" BIBREF2. For the philosopher and sociologist Jacques Ellul, however, in a society with mass communication, propaganda is inevitable and thus it is necessary to become more aware of it BIBREF3; but whether or not to classify a given strip of text as propaganda depends not just on its content but on its use on the part of both addressers and addressees BIBREF1, and this fact makes the automated detection of propaganda intrinsically challenging. Despite this difficulty, interest in automatically detecting misinformation and/or propaganda has gained significance due to the exponential growth in online sources of information combined with the speed with which information is shared today. 
The sheer volume of social interactions makes it impossible to manually check the veracity of all information being shared. Automation thus remains a potentially viable method of ensuring that we continue to enjoy the benefits of a connected world without the spread of misinformation through either ignorance or malicious intent. In the task introduced by BIBREF4, we are provided with articles tagged as propaganda at the sentence and fragment (or span) level and are tasked with making predictions on a development set followed by a final held-out test set. We note this gives us access to the articles in the development and test sets but not their labels. We participated in this task under the team name ProperGander and were placed 2nd on the sentence level classification task where we make use of our methods of incorporating cost-sensitivity into BERT. We also participated in the fragment level task and were placed 7th. The significant contributions of this work are: We show that common (`easy') methods of data augmentation for dealing with class imbalance do not improve base BERT performance. We provide a statistical method of establishing the similarity of datasets. We incorporate cost-sensitivity into BERT to enable models to adapt to dissimilar datasets. We release all our program code on GitHub and Google Colaboratory, so that other researchers can benefit from this work. Related work ::: Propaganda detection Most of the existing works on propaganda detection focus on identifying propaganda at the news article level, or even at the news outlet level with the assumption that each of the articles of the suspected propagandistic outlet are propaganda BIBREF5, BIBREF6. Here we study two tasks that are more fine-grained, specifically propaganda detection at the sentence and phrase (fragment) levels BIBREF4. This fine-grained setup aims to train models that identify linguistic propaganda techniques rather than distinguishing between the article source styles. BIBREF4 EMNLP19DaSanMartino were the first to propose this problem setup and release it as a shared task. Along with the released dataset, BIBREF4 proposed a multi-granularity neural network, which uses the deep bidirectional transformer architecture known as BERT, which features pre-trained context-dependent embeddings BIBREF0. Their system takes a joint learning approach to the sentence- and phrase-level tasks, concatenating the output representation of the less granular (sentence-level) task with the more fine-grained task using learned weights. In this work we also take the BERT model as the basis of our approach and focus on the class imbalance as well as the lack of similarity between training and test data inherent to the task. Related work ::: Class imbalance A common issue for many Natural Language Processing (NLP) classification tasks is class imbalance, the situation where one of the class categories comprises a significantly larger proportion of the dataset than the other classes. It is especially prominent in real-world datasets and complicates classification when the identification of the minority class is of specific importance. Models trained on the basis of minimising errors for imbalanced datasets tend to more frequently predict the majority class; achieving high accuracy in such cases can be misleading. Because of this, the macro-averaged F-score, chosen for this competition, is a more suitable metric as it weights the performance on each class equally. 
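As a small illustration of the point about metrics (a toy example with made-up labels, not the shared-task data): on a 28%/72% split, roughly the class balance of the task's training data reported later in the Dataset section, a classifier that always predicts the majority class looks strong on accuracy but weak on the macro-averaged F-score.

```python
from sklearn.metrics import accuracy_score, f1_score

# toy imbalanced labels: ~28% positive ('propaganda'), ~72% negative
y_true = [1] * 28 + [0] * 72
y_majority = [0] * 100  # always predict 'non-propaganda'

print(accuracy_score(y_true, y_majority))             # 0.72 -- looks deceptively good
# sklearn warns that F1 for the never-predicted class is ill-defined and scores it 0
print(f1_score(y_true, y_majority, average="macro"))  # ~0.42 -- exposes the failure
```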
As class imbalance is a widespread issue, multiple techniques have been developed that help alleviate it BIBREF7, BIBREF8, by either adjusting the model (e.g. changing the performance metric) or changing the data (e.g. oversampling the minority class or undersampling the majority class). Related work ::: Class imbalance ::: Cost-sensitive learning Cost-sensitive classification can be used when the “cost” of mislabelling one class is higher than that of mislabelling other classes BIBREF9, BIBREF10. For example, the real cost to a bank of miscategorising a large fraudulent transaction as authentic is potentially higher than miscategorising (perhaps only temporarily) a valid transaction as fraudulent. Cost-sensitive learning tackles the issue of class imbalance by changing the cost function of the model such that misclassification of training examples from the minority class carries more weight and is thus more `expensive'. This is achieved by simply multiplying the loss of each example by a certain factor. This cost-sensitive learning technique takes misclassification costs into account during model training, and does not modify the imbalanced data distribution directly. Related work ::: Class imbalance ::: Data augmentation Common methods that tackle the problem of class imbalance by modifying the data to create balanced datasets are undersampling and oversampling. Undersampling randomly removes instances from the majority class and is only suitable for problems with an abundance of data. Oversampling means creating more minority class instances to match the size of the majority class. Oversampling methods range from simple random oversampling, i.e. repeating the training procedure on instances from the minority class, chosen at random, to the more complex, which involves constructing synthetic minority-class samples. Random oversampling is similar to cost-sensitive learning as repeating the sample several times makes the cost of its mis-classification grow proportionally. Kolomiyets et al. kolomiyets2011model, Zhang et al. zhang2015character, and Wang and Yang wang2015s perform data augmentation using synonym replacement, i.e. replacing random words in sentences with their synonyms or nearest-neighbor embeddings, and show its effectiveness on multiple tasks and datasets. Wei et al. wei2019eda provide a great overview of `easy' data augmentation (EDA) techniques for NLP, including synonym replacement as described above, and random deletion, i.e. removing words in the sentence at random with pre-defined probability. They show the effectiveness of EDA across five text classification tasks. However, they mention that EDA may not lead to substantial improvements when using pre-trained models. In this work we test this claim by comparing performance gains of using cost-sensitive learning versus two data augmentation methods, synonym replacement and random deletion, with a pre-trained BERT model. More complex augmentation methods include back-translation BIBREF11, translational data augmentation BIBREF12, and noising BIBREF13, but these are out of the scope of this study. Dataset The Propaganda Techniques Corpus (PTC) dataset for the 2019 Shared Task on Fine-Grained Propaganda consists of a training set of 350 news articles, consisting of just over 16,965 total sentences, in which specifically propagandistic fragments have been manually spotted and labelled by experts. 
This is accompanied by a development set (or dev set) of 61 articles with 2,235 total sentences, whose labels are maintained by the shared task organisers; and two months after the release of this data, the organisers released a test set of 86 articles and 3,526 total sentences. In the training set, 4,720 ($\sim 28\%$) of the sentences have been assessed as containing propaganda, with 12,245 sentences ($\sim 72 \%$) as non-propaganda, demonstrating a clear class imbalance. In the binary sentence-level classification (SLC) task, a model is trained to detect whether each and every sentence is either 'propaganda' or 'non-propaganda'; in the more challenging fragment-level classification (FLC) task, a model is trained to detect one of 18 possible propaganda technique types in spans of characters within sentences. These propaganda types are listed in BIBREF4 and range from those which might be recognisable at the lexical level (e.g. Name_Calling, Repetition) to those which would likely need to incorporate semantic understanding (Red_Herring, Straw_Man). Figure FIGREF13 shows several example sentences from a sample document annotated with fragment-level classifications (FLC). The corresponding sentence-level classification (SLC) labels would indicate that sentences 3, 4, and 7 are 'propaganda' while the other sentences are 'non-propaganda'. Dataset ::: Data Distribution One of the most interesting aspects of the data provided for this task is the notable difference between the training and the development/test sets. We emphasise that this difference is realistic and reflective of real world news data, in which major stories are often accompanied by the introduction of new terms, names, and even phrases. This is because the training data is a subset of past data while the model is to be used on future data which deals with different newsworthy topics. We demonstrate this difference statistically by using a method for finding the similarity of corpora suggested by BIBREF14. We use the Wilcoxon signed-rank test BIBREF15, which compares the frequency counts of randomly sampled elements from different datasets to determine if those datasets have a statistically similar distribution of elements. We implement this as follows. For each of the training, development and test sets, we extract all words (retaining the repeats) while ignoring a set of stopwords (identified through the Python Natural Language Toolkit). We then extract 10,000 samples (with replacement) for various pairs of these datasets (training, development, and test sets along with splits of each of these datasets). Finally, we use comparative word frequencies from the two sets to calculate the p-value using the Wilcoxon signed-rank test. Table TABREF15 provides the minimum and maximum p-values and their interpretations for ten such runs of each pair reported. Using a p-value threshold of 0.05, we show that the train, development and test sets are each self-similar and also significantly different from each other. In measuring self-similarity, we split each dataset after shuffling all sentences. While this comparison is made at the sentence level (as opposed to the article level), it is consistent with the granularity used for propaganda detection, which is also at the sentence level.
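The corpus-similarity test described above can be sketched as follows. This is our reading of the procedure rather than the authors' released code: pooling the two corpora before sampling and comparing relative word frequencies are assumptions, and NLTK's stopword list plus SciPy's `wilcoxon` are assumed dependencies.

```python
import random
from collections import Counter
from nltk.corpus import stopwords        # requires: nltk.download('stopwords')
from scipy.stats import wilcoxon

STOPWORDS = set(stopwords.words("english"))

def content_words(sentences):
    return [w for s in sentences for w in s.lower().split() if w not in STOPWORDS]

def corpus_similarity_pvalue(corpus_a, corpus_b, n_samples=10_000, seed=0):
    """Wilcoxon signed-rank test over paired frequencies of sampled words."""
    random.seed(seed)
    tokens_a, tokens_b = content_words(corpus_a), content_words(corpus_b)
    freq_a, freq_b = Counter(tokens_a), Counter(tokens_b)
    pool = tokens_a + tokens_b
    sampled = [random.choice(pool) for _ in range(n_samples)]   # with replacement
    rel_a = [freq_a[w] / len(tokens_a) for w in sampled]
    rel_b = [freq_b[w] / len(tokens_b) for w in sampled]
    # A low p-value indicates the corpora draw words from different distributions.
    return wilcoxon(rel_a, rel_b).pvalue
```

Running this over pairs such as train vs. dev, or a 25% vs. 75% split of a single set, would mirror the kind of comparison summarised in Table TABREF15.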
We also perform measurements of self-similarity after splitting the data at the article level and find that the conclusions of similarity between the sets hold with a p-value threshold of 0.001, where p-values for similarity between the training and dev/test sets are orders of magnitude lower compared to self-similarity. Since we use random sampling, we run this test 10 times and present both the maximum and minimum p-values. We include the similarity between 25% of a dataset and the remaining 75% of that set because that is the train/test ratio we use in our experiments, further described in our methodology (Section SECREF4). This analysis shows that while all splits of each of the datasets are statistically similar, the training set (and the split of the training set that we use for experimentation) is significantly different from the development and test sets. While our analysis does show that the development and the test sets are dissimilar, we note (based on the p-values) that they are significantly more similar to each other than they are to the training set. Methodology We were provided with two tasks: (1) propaganda fragment-level identification (FLC) and (2) propagandistic sentence-level identification (SLC). While we develop systems for both tasks, our main focus is on the latter. Given the differences between the training, development, and test sets, we focus on methods for generalising our models. We note that propaganda identification is, in general, an imbalanced binary classification problem as most sentences are not propagandistic. Due to the non-deterministic nature of fast GPU computations, we run each of our models three times and report the average of these three runs through the rest of this section. When picking the model to use for our final submission, we pick the model that performs best on the development set. When testing our models, we split the labelled training data into two non-overlapping parts: the first one, consisting of 75% of the training data, is used to train models, whereas the other is used to test the effectiveness of the models. All models are trained and tested on the same split to ensure comparability. Similarly, to ensure that our models remain comparable, we continue to train on the same 75% of the training set even when testing on the development set. Once the best model is found using these methods, we train that model on all of the training data available before then submitting the results on the development set to the leaderboard. These results are detailed in the section describing our results (Section SECREF5). Methodology ::: Class Imbalance in Sentence Level Classification The sentence level classification task is an imbalanced binary classification problem that we address using BERT BIBREF0. We use BERTBASE, uncased, which consists of 12 self-attention layers and returns a 768-dimensional vector representation of a sentence. So as to make use of BERT for sentence classification, we include a fully connected layer on top of the BERT self-attention layers, which classifies the sentence embedding provided by BERT into the two classes of interest (propaganda or non-propaganda). We attempt to exploit various data augmentation techniques to address the problem of class imbalance.
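The augmentation baselines referred to in the last sentence are simple to reproduce in outline. The sketch below is not the authors' code: it shows random oversampling of the minority class and EDA-style random word deletion; synonym replacement would additionally need a thesaurus such as WordNet, and the deletion probability is an illustrative choice.

```python
import random

def random_oversample(sentences, labels, minority_label=1):
    """Repeat randomly chosen minority-class sentences until classes are balanced."""
    minority = [(s, l) for s, l in zip(sentences, labels) if l == minority_label]
    majority = [(s, l) for s, l in zip(sentences, labels) if l != minority_label]
    extra = [random.choice(minority) for _ in range(len(majority) - len(minority))]
    balanced = minority + majority + extra
    random.shuffle(balanced)
    return [s for s, _ in balanced], [l for _, l in balanced]

def random_deletion(sentence, p_delete=0.1):
    """EDA-style augmentation: drop each word with a fixed probability."""
    words = sentence.split()
    kept = [w for w in words if random.random() > p_delete]
    return " ".join(kept) if kept else random.choice(words)
```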
Table TABREF17 shows the results of our experiments for different data augmentation techniques when, after shuffling the training data, we train the model on 75% of the training data and test it on the remaining 25% of the training data and the development data. We observe that BERT without augmentation consistently outperforms BERT with augmentation in the experiments when the model is trained on 75% of the training data and evaluated on the rest, i.e trained and evaluated on similar data, coming from the same distribution. This is consistent with observations by Wei et al. wei2019eda that contextual word embeddings do not gain from data augmentation. The fact that we shuffle the training data prior to splitting it into training and testing subsets could imply that the model is learning to associate topic words, such as `Mueller', as propaganda. However, when we perform model evaluation using the development set, which is dissimilar to the training, we observe that synonym insertion and word dropping techniques also do not bring performance gains, while random oversampling increases performance over base BERT by 4%. Synonym insertion provides results very similar to base BERT, while random deletion harms model performance producing lower scores. We believe that this could be attributed to the fact that synonym insertion and random word dropping involve the introduction of noise to the data, while oversampling does not. As we are working with natural language data, this type of noise can in fact change the meaning of the sentence. Oversampling on the other hand purely increases the importance of the minority class by repeating training on the unchanged instances. So as to better understand the aspects of oversampling that contribute to these gains, we perform a class-wise performance analysis of BERT with/without oversampling. The results of these experiments (Table TABREF18) show that oversampling increases the overall recall while maintaining precision. This is achieved by significantly improving the recall of the minority class (propaganda) at the cost of the recall of the majority class. So far we have been able to establish that a) the training and test sets are dissimilar, thus requiring us to generalise our model, b) oversampling provides a method of generalisation, and c) oversampling does this while maintaining recall on the minority (and thus more interesting) class. Given this we explore alternative methods of increasing minority class recall without a significant drop in precision. One such method is cost-sensitive classification, which differs from random oversampling in that it provides a more continuous-valued and consistent method of weighting samples of imbalanced training data; for example, random oversampling will inevitably emphasise some training instances at the expense of others. We detail our methods of using cost-sensitive classification in the next section. Further experiments with oversampling might have provided insights into the relationships between these methods, which we leave for future exploration. Methodology ::: Cost-sensitive Classification As discussed in Section SECREF10, cost-sensitive classification can be performed by weighting the cost function. We increase the weight of incorrectly labelling a propagandistic sentence by altering the cost function of the training of the final fully connected layer of our model previously described in Section SECREF16. 
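A minimal PyTorch sketch of this weighting is given below; the detailed formulation follows in the next paragraph. The class ordering (index 0 for non-propaganda, index 1 for propaganda), the batch contents, and the random stand-in for BERT sentence embeddings are illustrative assumptions; the 1:4 weighting reflects the value the authors report finding best.

```python
import torch
import torch.nn as nn

classifier = nn.Linear(768, 2)                    # final fully connected layer on top of BERT
class_weights = torch.tensor([1.0, 4.0])          # non-propaganda: 1, propaganda: 4
criterion = nn.CrossEntropyLoss(weight=class_weights)

sentence_embeddings = torch.randn(8, 768)         # stand-in for BERT sentence vectors
labels = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1])   # imbalanced batch
logits = classifier(sentence_embeddings)
loss = criterion(logits, labels)                  # mislabelling propaganda costs 4x
loss.backward()
# With the default reduction="mean", PyTorch normalises the weighted
# per-example losses by the sum of the weights of the target classes.
```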
We make these changes through the use of PyTorch BIBREF16, which calculates the cross-entropy loss for a single prediction $x$, an array where the $j^{th}$ element represents the model's prediction for class $j$, labelled with the class $class$ as given by Equation DISPLAY_FORM20. The cross-entropy loss given in Equation DISPLAY_FORM20 is modified to accommodate an array $weight$, the $i^{th}$ element of which represents the weight of the $i^{th}$ class, as described in Equation DISPLAY_FORM21. Intuitively, we increase the cost of getting the classification of an “important” class wrong and correspondingly decrease the cost of getting a less important class wrong. In our case, we increase the cost of mislabelling the minority class, which is “propaganda” (as opposed to “non-propaganda”). We expect the effect of this to be similar to that of oversampling, in that it is likely to enable us to increase the recall of the minority class, resulting in a decrease in the recall of the overall model while maintaining high precision. We reiterate that this specific change improves the model's ability to identify elements belonging to the minority class in dissimilar datasets when using BERT. We explore the validity of this by performing several experiments with different weights assigned to the minority class. We note that in our experiments we use significantly higher weights than the weights proportional to class frequencies in the training data that are common in the literature BIBREF17. Rather than directly using the class proportions of the training set, we show that tuning weights based on performance on the development set is more beneficial. Figure FIGREF22 shows the results of these experiments, wherein we are able to maintain the precision on the subset of the training set used for testing while reducing its recall and thus generalising the model. The fact that the model is generalising on a dissimilar dataset is confirmed by the increase in the development set F1 score. We note that the gains are not infinite and that a balance must be struck based on the amount of generalisation and the corresponding loss in accuracy. The exact weight to use for the best transfer of classification accuracy is related to the dissimilarity of that other dataset and hence is to be obtained experimentally through hyperparameter search. Our experiments showed that a value of 4 is best suited for this task. We do not include the complete results of our experiments here due to space constraints but include them along with charts and program code on our project website. Based on this exploration, we find that the best weights for this particular dataset are 1 for non-propaganda and 4 for propaganda, and we use this to train the final model used to submit results to the leaderboard. We also found that adding Part of Speech tags and Named Entity information to BERT embeddings by concatenating these one-hot vectors to the BERT embeddings does not improve model performance. We describe these results in Section SECREF5. Methodology ::: Fragment-level classification (FLC) In addition to participating in the Sentence Level Classification task we also participate in the Fragment Level Classification task.
We note that extracting fragments that are propagandistic is similar to the task of Named Entity Recognition, in that they are both span extraction tasks, and so we use a BERT-based model designed for this task. We build on the work by BIBREF18, which stacks a Conditional Random Field on top of an LSTM to predict spans. This architecture is standard amongst state-of-the-art models that perform span identification. While the same span of text cannot have multiple named entity labels, it can have different propaganda labels. We get around this problem by picking one of the labels at random. Additionally, so as to speed up training, we only train our model on those sentences that contain some propagandistic fragment. In hindsight, we note that neither of these decisions was ideal and discuss what we might have otherwise done in Section SECREF7. Results In this section, we show our rankings on the leaderboard on the test set. Unlike the previous exploratory sections, in which we trained our model on part of the training set, we train the models described in this section on the complete training set. Results ::: Results on the SLC task Our best performing model, selected on the basis of a systematic analysis of the relationship between cost weights and recall, places us second amongst the 25 teams that submitted their results on this task. We present our score on the test set alongside those of comparable teams in Table TABREF25. We note that the task description paper BIBREF4 describes a method of achieving an F1 score of 60.98% on a similar task, although this reported score is not directly comparable to the results on this task because of the differences in testing sets. Results ::: Results on the FLC task We train the model described in Section SECREF23 on the complete training set before submitting to the leaderboard. Our best performing model was placed 7th amongst the 13 teams that submitted results for this task. We present our score on the test set alongside those of comparable teams in Table TABREF27. We note that the task description paper BIBREF4 describes a method of achieving an F1 score of 22.58% on a similar task, although this reported score is not directly comparable to the results on this task. One of the major setbacks to our method for identifying sentence fragments was the loss of training data as a result of randomly picking one label when the same fragment had multiple labels. This could have been avoided by training different models for each label and simply concatenating the results. Additionally, training on all sentences, including those that did not contain any fragments labelled as propagandistic, would have likely improved our model performance. We intend to perform these experiments as part of our ongoing research. Issues of Decontextualization in Automated Propaganda Detection It is worth reflecting on the nature of the shared task dataset (PTC corpus) and its structural correspondence (or lack thereof) to some of the definitions of propaganda mentioned in the introduction. First, propaganda is a social phenomenon and takes place as an act of communication BIBREF19, and so it is more than a simple information-theoretic message of zeros and ones—it also incorporates an addresser and addressee(s), each in phatic contact (typically via broadcast media), ideally with a shared denotational code and contextual surround(s) BIBREF20.
As such, a dataset of decontextualised documents with labelled sentences, devoid of authorial or publisher metadata, has taken us at some remove from even a simple everyday definition of propaganda. Our models for this shared task cannot easily incorporate information about the addresser or addressee; are left to assume a shared denotational code between author and reader (one perhaps simulated with the use of pre-trained word embeddings); and they are unaware of when or where the act(s) of propagandistic communication took place. This slipperiness is illustrated in our example document (Fig. FIGREF13): note that while Sentences 3 and 7, labelled as propaganda, reflect a propagandistic attitude on the part of the journalist and/or publisher, Sentence 4—also labelled as propaganda in the training data—instead reflects a “flag-waving" propagandistic attitude on the part of U.S. congressman Jeff Flake, via the conventions of reported speech BIBREF21. While reported speech often is signaled by specific morphosyntactic patterns (e.g. the use of double-quotes and “Flake said") BIBREF22, we argue that human readers routinely distinguish propagandistic reportage from the propagandastic speech acts of its subjects, and to conflate these categories in a propaganda detection corpus may contribute to the occurrence of false positives/negatives. Conclusions and Future Work In this work we have presented a method of incorporating cost-sensitivity into BERT to allow for better generalisation and additionally, we provide a simple measure of corpus similarity to determine when this method is likely to be useful. We intend to extend our analysis of the ability to generalise models to less similar data by experimenting on other datasets and models. We hope that the release of program code and documentation will allow the research community to help in this experimentation while exploiting these methods. Acknowledgements We would like to thank Dr Leandro Minku from the University of Birmingham for his insights into and help with the statistical analysis presented in this paper. This work was also partially supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1. Work by Elena Kochkina was partially supported by the Leverhulme Trust through the Bridges Programme and Warwick CDT for Urban Science & Progress under the EPSRC Grant Number EP/L016400/1. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Which natural language(s) are studied in this paper? Only give me the answer and do not output any other words.
Answer:
[ "Unanswerable", "English" ]
276
5,710
single_doc_qa
08b7fb44ab0e4312b3a19fcc8bb45963
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction To appear in Proceedings of International Workshop on Health Intelligence (W3PHIAI) of the 34th AAAI Conference on Artificial Intelligence, 2020. Physician burnout is a growing concern, estimated to be experienced by at least 35% of physicians in the developing world and 50% in the United States BIBREF0. BIBREF1 found that for every hour physicians provide direct clinical facetime to patients, nearly two additional hours are spent on EHR (Electronic Health Records) and administrative or desk work. As per the study conducted by Massachusetts General Physicians Organization (MPGO) BIBREF2 and as reported by BIBREF3, the average time spent on administrative tasks increased from 23.7% in 2014 to 27.9% in 2017. Both the surveys found that time spent on administrative tasks was positively associated with higher likelihood of burnout. Top reasons under administrative burden include working on the ambulatory EHR, handling medication reconciliation (sometimes done by aids), medication renewals, and medical billing and coding. The majority of these reasons revolve around documentation of information exchanged between doctors and patients during the clinical encounters. Automatically extracting such clinical information BIBREF4, BIBREF5 can not only help alleviate the documentation burden on the physician, but also allow them to dedicate more time directly with patients. Among all the clinical information extraction tasks, Medication Regimen (Medication, dosage, and frequency) extraction is particularly interesting due to its ability to help doctors with medication orders cum renewals, medication reconciliation, potentially verifying the reconciliations for errors, and, other medication-centered EHR documentation tasks. In addition, the same information when provided to patients can help them with better recall of doctor instructions which might aid in compliance with the care plan. This is particularly important given that patients forget or wrongly recollect 40-80% BIBREF6 of what is discussed in the clinic, and accessing EHR data has its own challenges. Spontaneous clinical conversations happening between a doctor and a patient, have several distinguishing characteristics from a normal monologue or prepared speech: it involves multiple speakers with overlapping dialogues, covers a variety of speech patterns, and the vocabulary can range from colloquial to complex domain-specific language. With recent advancements in Conversational Speech Recognition BIBREF7 rendering the systems less prone to errors, the subsequent challenge of understanding and extracting relevant information from the conversations is receiving increasing research focus BIBREF4, BIBREF8. In this paper, we focus on local information extraction in transcribed clinical conversations. Specifically, we extract dosage (e.g. 5mg) and frequency (e.g. once a day) for the medications (e.g. aspirin) from these transcripts, collectively referred to as Medication Regimen (MR) extraction. The information extraction is local as we extract the information from a segment of the transcript and not the entire transcript since doing the latter is difficult owing to the long meandering nature of the conversations often with multiple medication regimens and care plans being discussed. 
The challenges associated with the Medication Regimen (MR) extraction task include understanding the spontaneous dialog with clinical vocabulary and understanding the relationship between different entities as the discussion can contain multiple medications and dosages (e.g. doctor revising a dosage or reviewing all the current medications). We frame this problem as a Question Answering (QA) task by generating questions using templates. We base the QA model on pointer-generator networks BIBREF9 augmented with Co-Attentions BIBREF10. In addition, we develop models combining QA and Information Extraction frameworks using multi-decoder (one each for dosage and frequency) architecture. Lack of availability of a large volume of data is a typical challenge in healthcare. A conversation corpus by itself is a rare commodity in the healthcare data space because of the cost and difficulty in handing (because of data privacy concerns). Moreover, transcribing and labeling the conversations is a costly process as it requires domain-specific medical annotation expertise. To address data shortage and improve the model performance, we investigate different high-performance contextual embeddings (ELMO BIBREF11, BERT BIBREF12 and ClinicalBERT BIBREF13), and pretrain the models on a clinical summarization task. We further investigate the effects of training data size on our models. On the MR extraction task, ELMo with encoder multi-decoder architecture and BERT with encoder-decoder with encoders pretrained on the summarization task perform the best. The best-performing models improve our baseline's dosage and frequency extractions ROUGE-1 F1 scores from 54.28 and 37.13 to 89.57 and 45.94, respectively. Using our models, we present the first fully automated system to extract MR tags from spontaneous doctor-patient conversations. We evaluate the system (using our best performing models) on the transcripts generated from Automatic Speech Recognition (ASR) APIs offered by Google and IBM. In Google ASR's transcripts, our best model obtained ROUGE-1 F1 of 71.75 for Dosage extraction (which in this specific case equals to the percentage of times dosage is correct, refer Metrics Section for more details) and 40.13 for Frequency extraction tasks. On qualitative evaluation, we find that for 73.58% of the medications the model can find the correct frequency. These results demonstrate that the research on NLP can be used effectively in a real clinical setting to benefit both doctors and patients Data Our dataset consists of a total of 6,693 real doctor-patient conversations recorded in a clinical setting using distant microphones of varying quality. The recordings have an average duration of 9min 28s and have a verbatim transcript of 1,500 words on average (written by the experts). Both the audio and the transcript are de-identified (by removing the identifying information) with digital zeros and [de-identified] tags, respectively. The sentences in the transcript are grounded to the audio with the timestamps of its first and last word. The transcript of the conversations are annotated with summaries and Medication Regimen tags (MR tags), both grounded using the timestamps of the sentences from the transcript deemed relevant by the expert annotators, refer to Table TABREF1. The transcript for a typical conversation can be quite long, and not easy for many of the high performing deep learning models to act on. 
Moreover, the medical information about a concept/condition/entity can change during the conversation after a significant time gap. For example, dosage of a medication can be different when discussing current medication the patient is on vs when they are prescribed a different dosage. Hence, we have annotations, that are grounded to a short segment of the transcript. The summaries (#words - $\mu = 9.7; \sigma = 10.1$) are medically relevant and local. The MR tags are also local and are of the form {Medication Name, Dosage, Frequency}. If dosage ($\mu = 2.0; \sigma = 0$) or frequency ($\mu = 2.1; \sigma = 1.07$) information for a medication is not present in a grounded sentence, the corresponding field in the MR tag will be marked as `none'. In the MR tags, Medication Name and Dosage (usually a quantity followed by its units) can be relatively easily extracted from the transcript except for the units of the dosage which is sometimes inferred. In contrast, due to high degree of linguistic variation with which Frequency is often expressed, extracting Frequency requires an additional inference step. For example, `take one in the morning and at noon' from the transcript is tagged as `twice a day' in the frequency tag, likewise `take it before sleeping' is tagged as `at night time'. Out of overall 6,693 files, we set aside a random sample of 423 files (denoted as $\mathcal {D}_{test}$) for final evaluation. The remaining 6,270 files are used for training with 80% train (5016), 10% validation (627), and 10% test (627) split. Overall, the 6,270 files contains 156,186 summaries and 32,000 MR tags out of which 8,654 MR tags contain values for at least one of the Dosage or Frequency, which we used for training to avoid overfitting (the remaining MR tags have both Dosage and Frequency as `none'). Note that we have two test datasets: `10% test' - used to evaluate all the models, and $\mathcal {D}_{test}$ - used to measure the performance of best performing models on ASR transcripts. Approach We frame the Medication Regimen extraction problem as a Question Answering (QA) task, which forms the basis for our first approach. It can also be considered as a specific inference or relation extract task, since we extract specific information about an entity (Medication Name), hence our second approach is at the intersection of Question Answering (QA) and Information Extraction (IE) domains. Both the approaches involve using a contiguous segment of the transcript and the Medication Name as input, to find/infer the medication's Dosage and Frequency. When testing the approaches mimicking real-world conditions, we extract Medication Name from the transcript separately using ontology, refer to SECREF19. In the first approach, we frame the MR task as a QA task and generate questions using the template: “What is the $ <$dosage/frequency$>$ for $<$Medication Name$>$". Here, we use an abstractive QA model based on pointer-generator networks BIBREF9 augmented with coattention encoder BIBREF10 (QA-PGNet). In the second approach, we frame the problem as a conditioned IE task, where the information extracted depends on an entity (Medication Name). Here, we use a multi-decoder pointer-generator network augmented with coattention encoder (Multi-decoder QA-PGNet). 
Instead of using templates to generate questions and using a single decoder to extract different types of information as in the QA approach (which might lead to performance degradation), here we consider separate decoders for extracting specific types of information about an entity $E$ (Medication Name). Approach ::: Pointer-generator Network (PGNet) The network is a sequence-to-sequence attention model that can both copy a word from the input $I$ containing $P$ word tokens or generate a word from its vocabulary $vocab$, to produce the output sequence. First, the tokens of the $I$ are converted to embeddings and are fed one-by-one to the encoder, a single bi-LSTM layer, which encodes the tokens in $I$ into a sequence of hidden states - $H=encoder(I)$, where $ H=[h_1...h_P]$. For each decoder time step $t$, in a loop, we compute, 1) attention $a_t$ (using the last decoder state $s_{t-1}$), over the input tokens $I$, and 2) the decoder state $s_t$ using $a_t$. Then, at each time step, using both $a_t$ and $s_t$ we can find the probability $P_t(w)$, of producing a word $w$ (from both $vocab$ and $I$). For convenience, we denote the attention and the decoder as $decoder_{pg}(H)=P(w)$, where $P(w)=[P_1(w)...P_T(w)]$. The output can then be decoded from $P(w)$, which is decoded until it produces an `end of output token' or the number of steps reach the maximum allowed limit. Approach ::: QA PGNet We first encode both the question - $H_Q = encoder(Q)$, and the input - $H_I = encoder(I)$, separately using encoders (with shared weights). Then, to condition $I$ on $Q$ (and vice versa), we use the coattention encoder BIBREF10 which attends to both the $I$ and $Q$ simultaneously to generates the coattention context - $C_D = coatt(H_I, H_Q)$. Finally, using the pointer-generator decoder we find the probability distribution of the output sequence - $P(w) = decoder_{pg}([H_I; C_D])$, which is then decoded to generate the answer sequence. Approach ::: Multi-decoder (MD) QA PGNet After encoding the inputs into $H_I$ and $H_E$, for extracting $K$ types of information about an entity in an IE fashion, we use the following multi-decoder (MD) setup: Predictions for each of the $K$ decoders are then decoded using $P^k(w)$. All the networks discussed above are trained using a negative log-likelihood loss for the target word at each time step and summed over all the decoder time steps. Experiments We initialized MR extraction models' vocabulary from the training dataset after removing words with a frequency lower than 30 in the dataset, resulting in 456 words. Our vocabulary is small because of the size of the dataset, hence we rely on the model's ability to copy words to produce the output effectively. In all our model variations, the embedding and the network's hidden dimension are set to be equal. The networks were trained with a learning rate of 0.0015, dropout of 0.5 on the embedding layer, normal gradient clipping set at 2, batch size of 8, and optimized with Adagrad BIBREF15 and the training was stopped using the $10\%$ validation dataset. Experiments ::: Data Processing We did the following basic preprocessing to our data, 1) added `none' to the beginning of the input utterance, so the network could point to it when there was no relevant information in the input, 2) filtered outliers with a large number of grounded transcript sentences ($>$150 words), and 3) converted all text to lower case. 
To improve performance, we 1) standardized all numbers (both digits and words) to words concatenated with a hyphen (e.g. 110 -$>$ one-hundred-ten), in both input and output, 2) removed units from Dosage as sometimes the units were not explicitly mentioned in the transcript segment but were written by the annotators using domain knowledge, 3) prepended all medication mentions with `rx-' tag, as this helps model's performance when multiple medications are discussed in a segment (in both input and output), and 4) when a transcript segment has multiple medications or dosages being discussed we randomly shuffle them (in both input and output) and create a new data point, to increases the number of training data points. Randomly shuffling the entities increases the number of training MR tags from 8,654 to 11,521. Based on the data statistics after data processing, we fixed the maximum encoder steps to 100, dosage decoder steps to 1, and frequency decoder steps to 3 (for both the QA and Multi-decoder QA models). Experiments ::: Metrics For the MR extraction task, we measure the ROUGE-1 scores BIBREF14 for both the Dosage and Frequency extraction tasks. It should be noted that since Dosage is a single word token (after processing), both the reference and hypothesis are a single token, making its ROUGE-1 F1, Precision and Recall scores equal and be equal to percentage of times we find the correct dosage for the medications. In our annotations, Frequency has conflicting tags (e.g. {`Once a day', `twice a day'} and `daily'), hence metrics like Exact Match will be erroneous. To address this issue, we use the ROUGE scores to compare different models on the 10% test dataset and we use qualitative evaluation to measure the top-performing models on $\mathcal {D}_{test}$. Experiments ::: Model variations We consider QA PGNet and Multi-decoder QA PGNet with lookup table embedding as baseline models and improve on the baselines with other variations described below. Apart from learning-based baselines, we also create two naive baselines, one each for the Dosage and Frequency extraction tasks. For Dosage extraction, the baseline we consider is `Nearest Number', where we take the number nearest to the Medication Name as the prediction, and `none' if no number is mentioned or if the Medication Name is not detected in the input. For Frequency extraction, the baseline we consider is `Random Top-3' where we predict a random Frequency tag, from top-3 most frequent ones from our dataset - {`none', `daily', `twice a day'}. Embedding: We developed different variations of our models with a simple lookup table embeddings learned from scratch and using high-performance contextual embeddings, which are ELMo BIBREF11, BERT BIBREF16 and ClinicalBERT BIBREF13 (trained and provided by the authors). Refer to Table TABREF5 for the performance comparisons. We derive embeddings from ELMo by learning a linear combination of its last three layer's hidden states (task-specific fine-tuning BIBREF11). Similarly, for BERT-based embeddings, we take a linear combination of the hidden states from its last four layers, as this combination performs best without increasing the size of the embeddings BIBREF16. Since BERT and ClinicalBERT use word-piece vocabulary and computes sub-word embeddings, we compute word-level embedding by averaging the corresponding sub-word tokens. 
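The word-piece averaging step just described can be sketched as follows. This is not the authors' code: it relies on the Hugging Face `transformers` API, and for brevity it averages vectors from only the final BERT layer rather than the learned combination of the last four layers described above.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def word_level_embeddings(sentence):
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]      # (num_subtokens, 768)
    per_word = {}
    for idx, word_id in enumerate(enc.word_ids(0)):    # sub-token -> word index
        if word_id is not None:                        # skip [CLS] / [SEP]
            per_word.setdefault(word_id, []).append(hidden[idx])
    # Average the sub-token vectors belonging to each word.
    return [torch.stack(vecs).mean(dim=0) for _, vecs in sorted(per_word.items())]

vectors = word_level_embeddings("patient will take rx-aspirin eighty-one milligrams daily")
print(len(vectors), vectors[0].shape)  # one 768-dim vector per pre-tokenised word
```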
ELMo and BERT embeddings both have 1024 dimensions, ClinicalBERT have 768 as it is based on BERT base model, and the lookup table have 128 – higher dimension models leads to overfitting. Pertaining Encoder: We trained the PGNet as a summarization task using the clinical summaries and used the trained model to initialize the encoders (and the embeddings) of the corresponding QA models. We use a vocab size of 4073 words, derived from the training dataset with a frequency threshold of 30 for the task. We trained the models using Adagrad optimizer with a learning rate of 0.015, normal gradient clipping set at 2 and trained for around 150000 iterations (stopped using validation dataset). On the summarization task PGNet obtained ROUGE-1 F1 scores of 41.42 with ELMo and 39.15 with BERT embeddings. We compare the effects of pretraining the model in Table: TABREF5, models with `pretrained encoder' had their encoders and embeddings pretrained with the summarization task. Results and Discussion ::: Difference in networks and approaches Embeddings: On Dosage extraction, in general, ELMo obtains better performance than BERT, refer to Table TABREF5. This could be because we concatenated the numbers with a hyphen, and as ELMo uses character-level tokens it can learn the tagging better than BERT, a similar observation is also noted in BIBREF17. On the other hand, on Frequency extraction, without pretraining, ELMo's performance is lagging by a big margin of $\sim $8.5 ROUGE-1 F1 compared to BERT-based embeddings. Although in cases without encoder pretraining, ClinicalBERT performed the best in the Frequency extraction task (by a small margin), in general, it does not perform as well as BERT. This could also be a reflection of the fact that the language and style of writing used in clinical notes is very different from the way doctors converse with patients and the embedding dimension difference. Lookup table embedding performed decently in the frequency extraction task but lags behind in the Dosage extraction task. From the metrics and qualitative inspection, we find that the Frequency extraction is an easier task than the Dosage extraction. This is because, in the conversations, frequency information usually occurs in isolation and near the medications, but a medication's dosage can occur 1) near other medication's dosages, 2) with previous dosages (when a dosage for a medication is revised), and 3) after a large number of words from the medication. Other Variations: Considering various models' performance (without pretraining) and the resource constraint, we choose ELMo and BERT embeddings to analyze the effects of pretraining the encoder. When the network's encoder (and embedding) is pretrained with the summarization task, we 1) see a small decrease in the average number of iterations required for training, 2) improvement in individual performances of all models for both the sub-tasks, and 3) get best performance metrics across all variations, refer to Table TABREF5. Both in terms of performance and the training speed, there is no clear winner between shared and multi-decoder approaches. Medication tagging and data augmentation increase the best-performing model's ROUGE-1 F1 score by $\sim $1.5 for the Dosage extraction task. We also measure the performance of Multitask Question Answering Network (MQAN) BIBREF18 a QA model trained by the authors on the Decathlon multitask challenge. Since MQAN was not trained to produce the output sequence in our MR tags, it would not be fair to compute ROUGE scores. 
Instead, we randomly sample the MQAN's predictions from the 10% test dataset and qualitatively evaluate it. From the evaluations, we find that MQAN can not distinguish between frequency and dosage, and mixed the answers. MQAN correctly predicted the dosage for 29.73% and frequency for 24.24% percent of the medications compared to 84.12% and 76.34% for the encoder pretrained BERT QA PGNet model trained on our dataset. This could be because of the difference in the training dataset, domain and the tasks in the Decathlon challenge compared to ours. Almost all our models perform better than the naive baselines and the ones using lookup table embeddings, and our best performing models outperform them significantly. Among all the variations, the best performing models are ELMo with Multi-decoder (Dosage extraction) and BERT with shared-decoder QA PGNet architecture with pretrained encoder (Frequency extraction). We choose these two models for our subsequent analysis. Results and Discussion ::: Breakdown of Performance We categorize the 10% test dataset into different categories based on the complexity and type of the data and analyze the breakdown of the system's performance in Table TABREF11. We breakdown the Frequency extraction into two categories, 1) None: ground truth Frequency tag is `none', and 2) NN (Not None): ground truth Frequency tag is not `none'. Similarly, the Dosage extraction into 5 categories, 1) None: ground truth dosage tag is `none', 2) MM (Multiple Medicine): input segment has more than one Medication mentioned, 3) MN (Multiple Numbers): input segment has more than one number present, and 4) NBM (Number between correct Dosage and Medicine) : between the Medication Name and the correct Dosage in the input segment there are other numbers present. Note that the categories of the Dosage extraction task are not exhaustive, and one tag can belong to multiple categories. From the performance breakdown of Dosage extraction task, we see that 1) the models predict `none' better than other categories, i.e., the models are correctly able to identify when a medication's dosage is absent, 2) there is performance dip in hard cases (MM, MN, and NBM), 3) the models are able to figure out the correct dosage (decently) for a medication even when there are multiple numbers/dosage present, and 4) the model struggles the most in the NBM category. The models' low performance in NBM could be because we have a comparatively lower number of examples to train in this category. The Frequency extraction task performs equally well when the tag is `none` or not. In most categories, we see an increase in performance when using pretrained encoders. Results and Discussion ::: Training Dataset Size We vary the number of MR tags used to train the model and analyze the model's performance when training the networks, using publicly available contextual embeddings, compared to using pretrained embeddings and encoder (pretrained on the summarization task). Out of the 5,016 files in the 80% train dataset only 2,476 have atleast one MR tag. Therefore, out of the 2476 files, we randomly choose 100, 500, and 1000 files and trained the best performing model variations to observe the performance differences, refer to Figure FIGREF12. For all these experiments we used the same vocabulary size (456), the same hyper/training parameters, and the same 10% test split of 627 files. 
As expected, we see that the encoder pretrained models have higher performance on all the different training data sizes, i.e., they achieve higher performance on a lower number of data points, refer to Figure FIGREF12. The difference, as expected, shrinks as the training data size increases. Results and Discussion ::: Evaluating on ASR transcripts To test the performance of our models on real-world conditions, we use commercially available ASR services (Google and IBM) to transcribe the $\mathcal {D}_{test}$ files and measure the performance of our models without assuming any annotations (except when calculating the metrics). It should be noted that this is not the case in our previous evaluations using `10% test' dataset where we use the segmentation information. For ground truth annotations on ASR transcripts, we aligned the MR tags from human written transcripts to the ASR transcript using their grounded timing information. Additionally, since ASR is prone to errors, during the alignment, if a medication from an MR tag is not recognized correctly in the ASR transcript, we remove the corresponding MR tag. In our evaluations, we use Google Cloud Speech-to-Text (G-STT) and IBM Watson Speech to Text (IBM-STT) as these were among the top-performing ASR APIs on medical speech BIBREF19 and were readily available to us. We used G-STT, with the `video model' with punctuation settings. Unlike our human written transcripts, the transcript provided by G-STT is not verbatim and does not have disfluencies. IBM-STT, on the other hand, does not give punctuation so we used the speaker changes to add end-of-sentence punctuation. In our $\mathcal {D}_{test}$ dataset, on initial study we see a Word Error Rate of $\sim $50% for the ASR APIs and this number is not accurate because, 1) of the de-identification, 2) disfluencies (verbatim) difference between the human written and ASR transcript, and 3) minor alignment differences between the audio and the ground truth transcript. During this evaluation, we followed the same preprocessing methods we used during training. Then, we auto segment the transcript into small contiguous segments similar to the grounded sentences in the annotations for tags extraction. To segment the transcript, we follow a simple procedure. First, we detected all the medications in a transcript using RxNorm BIBREF20 via string matching. For all the detected medications, we selected $2 \le x \le 5$ nearby sentences as the input to our model. We increased $x$ iteratively until we encountered a quantity entity – detected using spaCy's entity recognizer, and we set $x$ as 2 if we did not detect any entities in the range. We show the model's performance on ASR transcripts and human written transcripts with automatic segmentation, and human written transcripts with human (defined) segmentation, in Table TABREF18. The number of recognized medications in IBM-STT is only 95 compared to 725 (human written), we mainly consider the models' performance on G-STT's transcripts (343). On the Medications that were recognized correctly, the models can perform decently on ASR transcripts in comparison to human transcripts (within 5 points ROUGE-1 F1 for both tasks, refer to Table TABREF18). This shows that the models are robust to ASR variations discussed above. The lower performance compared to human transcripts is mainly due to incorrect recognition of Dosage and other medications in the same segments (changing the meaning of the text). 
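A sketch of the auto-segmentation heuristic described above is given below. The toy medication list stands in for RxNorm string matching, the symmetric sentence window and the QUANTITY/CARDINAL entity labels are our assumptions about unspecified details, and spaCy's small English model is an assumed dependency.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # requires: python -m spacy download en_core_web_sm
MEDICATION_LEXICON = {"aspirin", "lisinopril", "metformin"}   # placeholder for RxNorm lookup

def segment_for_medications(sentences):
    """Return (medication, transcript segment) pairs to feed the extraction model."""
    segments = []
    for i, sentence in enumerate(sentences):
        for word in sentence.lower().split():
            if word not in MEDICATION_LEXICON:
                continue
            chosen = 2                                  # fall back to x = 2
            for x in range(2, 6):                       # 2 <= x <= 5 nearby sentences
                window = sentences[max(0, i - x): i + x + 1]
                doc = nlp(" ".join(window))
                if any(ent.label_ in ("QUANTITY", "CARDINAL") for ent in doc.ents):
                    chosen = x                          # stop once a quantity is found
                    break
            window = sentences[max(0, i - chosen): i + chosen + 1]
            segments.append((word, " ".join(window)))
    return segments
```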
By comparing the performance of the model on the human written transcripts with human (defined) segmentation and the same with auto segmentation, we see a 10 point drop in Dosage and 6 point drop in Frequency extraction tasks. This points out the need for more sophisticated segmentation algorithms. With G-STT, our best model obtained ROUGE-1 F1 of 71.75 (which equals to percentage of times dosage is correct in this case) for Dosage extraction and 40.13 for Frequency extraction tasks. To measure the percentage of times the correct frequency was extracted by the model, we qualitatively compared the extracted and predicted frequency. We find that for 73.58% of the medications the model can find the correct frequency from the transcripts. Conclusion In this paper, we explore the Medication Regimen (MR) extraction task of extracting dosage and frequency for the medications mentioned in a doctor-patient conversation transcript. We explore different variations of abstractive QA models and new architecture in the intersection of QA and IE frameworks and provide a comparative performance analysis of the methods along with other techniques like pretraining to improve the overall performance. Finally, we demonstrate the performance of our best-performing models by automatically extracting MR tags from spontaneous doctor-patient conversations (using commercially available ASR). Our best model can correctly extract the dosage for 71.75% (interpretation of ROUGE-1 score) and frequency for 73.58% (on qualitative evaluation) of the medications discussed in the transcripts generated using Google Speech-To-Text. In summary, we demonstrate that research on NLP can be translated into real-world clinical settings to realize its benefits for both doctors and patients. Using ASR transcripts in our training process to improve our performance on both the tasks and extending the medication regimen extraction network to extract other important medical information can be interesting lines of future work. Acknowledgements We thank: University of Pittsburgh Medical Center (UPMC), and Abridge AI Inc. for providing access to the de-identified data corpus; Dr. Shivdev Rao, a faculty member and practicing cardiologist in UPMC's Heart and Vascular Institute and Prof. Florian Metze, Associate Research Professor, Carnegie Mellon University for helpful discussions; Ben Schloss, Steven Coleman, and Deborah Osakue for data business development and annotation management. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Is the data de-identified? Only give me the answer and do not output any other words.
Answer:
[ "Yes", "Yes" ]
276
6,305
single_doc_qa
0a94ee09e3dd48a3bf72208e692e328b
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The task of generating natural language descriptions of structured data (such as tables) BIBREF2 , BIBREF3 , BIBREF4 has seen a growth in interest with the rise of sequence to sequence models that provide an easy way of encoding tables and generating text from them BIBREF0 , BIBREF1 , BIBREF5 , BIBREF6 . For text generation tasks, the only gold standard metric is to show the output to humans for judging its quality, but this is too expensive to apply repeatedly anytime small modifications are made to a system. Hence, automatic metrics that compare the generated text to one or more reference texts are routinely used to compare models BIBREF7 . For table-to-text generation, automatic evaluation has largely relied on BLEU BIBREF8 and ROUGE BIBREF9 . The underlying assumption behind these metrics is that the reference text is gold-standard, i.e., it is the ideal target text that a system should generate. In practice, however, when datasets are collected automatically and heuristically, the reference texts are often not ideal. Figure FIGREF2 shows an example from the WikiBio dataset BIBREF0 . Here the reference contains extra information which no system can be expected to produce given only the associated table. We call such reference texts divergent from the table. We show that existing automatic metrics, including BLEU, correlate poorly with human judgments when the evaluation sets contain divergent references (§ SECREF36 ). For many table-to-text generation tasks, the tables themselves are in a pseudo-natural language format (e.g., WikiBio, WebNLG BIBREF6 , and E2E-NLG BIBREF10 ). In such cases we propose to compare the generated text to the underlying table as well to improve evaluation. We develop a new metric, PARENT (Precision And Recall of Entailed N-grams from the Table) (§ SECREF3 ). When computing precision, PARENT effectively uses a union of the reference and the table, to reward correct information missing from the reference. When computing recall, it uses an intersection of the reference and the table, to ignore extra incorrect information in the reference. The union and intersection are computed with the help of an entailment model to decide if a text n-gram is entailed by the table. We show that this method is more effective than using the table as an additional reference. Our main contributions are: Table-to-Text Generation We briefly review the task of generating natural language descriptions of semi-structured data, which we refer to as tables henceforth BIBREF11 , BIBREF12 . Tables can be expressed as set of records INLINEFORM0 , where each record is a tuple (entity, attribute, value). When all the records are about the same entity, we can truncate the records to (attribute, value) pairs. For example, for the table in Figure FIGREF2 , the records are {(Birth Name, Michael Dahlquist), (Born, December 22 1965), ...}. The task is to generate a text INLINEFORM1 which summarizes the records in a fluent and grammatical manner. For training and evaluation we further assume that we have a reference description INLINEFORM2 available for each table. 
We let INLINEFORM3 denote an evaluation set of tables, references and texts generated from a model INLINEFORM4 , and INLINEFORM5 , INLINEFORM6 denote the collection of n-grams of order INLINEFORM7 in INLINEFORM8 and INLINEFORM9 , respectively. We use INLINEFORM10 to denote the count of n-gram INLINEFORM11 in INLINEFORM12 , and INLINEFORM13 to denote the minimum of its counts in INLINEFORM14 and INLINEFORM15 . Our goal is to assign a score to the model, which correlates highly with human judgments of the quality of that model. PARENT PARENT evaluates each instance INLINEFORM0 separately, by computing the precision and recall of INLINEFORM1 against both INLINEFORM2 and INLINEFORM3 . Evaluation via Information Extraction BIBREF1 proposed to use an auxiliary model, trained to extract structured records from text, for evaluation. However, the extraction model presented in that work is limited to the closed-domain setting of basketball game tables and summaries. In particular, they assume that each table has exactly the same set of attributes for each entity, and that the entities can be identified in the text via string matching. These assumptions are not valid for the open-domain WikiBio dataset, and hence we train our own extraction model to replicate their evaluation scheme. Our extraction system is a pointer-generator network BIBREF19 , which learns to produce a linearized version of the table from the text. The network learns which attributes need to be populated in the output table, along with their values. It is trained on the training set of WikiBio. At test time we parsed the output strings into a set of (attribute, value) tuples and compare it to the ground truth table. The F-score of this text-to-table system was INLINEFORM0 , which is comparable to other challenging open-domain settings BIBREF20 . More details are included in the Appendix SECREF52 . Given this information extraction system, we consider the following metrics for evaluation, along the lines of BIBREF1 . Content Selection (CS): F-score for the (attribute, value) pairs extracted from the generated text compared to those extracted from the reference. Relation Generation (RG): Precision for the (attribute, value) pairs extracted from the generated text compared to those in the ground truth table. RG-F: Since our task emphasizes the recall of information from the table as well, we consider another variant which computes the F-score of the extracted pairs to those in the table. We omit the content ordering metric, since our extraction system does not align records to the input text. Experiments & Results In this section we compare several automatic evaluation metrics by checking their correlation with the scores assigned by humans to table-to-text models. Specifically, given INLINEFORM0 models INLINEFORM1 , and their outputs on an evaluation set, we show these generated texts to humans to judge their quality, and obtain aggregated human evaluation scores for all the models, INLINEFORM2 (§ SECREF33 ). Next, to evaluate an automatic metric, we compute the scores it assigns to each model, INLINEFORM3 , and check the Pearson correlation between INLINEFORM4 and INLINEFORM5 BIBREF21 . Data & Models Our main experiments are on the WikiBio dataset BIBREF0 , which is automatically constructed and contains many divergent references. In § SECREF47 we also present results on the data released as part of the WebNLG challenge. We developed several models of varying quality for generating text from the tables in WikiBio. 
This gives us a diverse set of outputs to evaluate the automatic metrics on. Table TABREF32 lists the models along with their hyperparameter settings and their scores from the human evaluation (§ SECREF33 ). Our focus is primarily on neural sequence-to-sequence methods since these are most widely used, but we also include a template-based baseline. All neural models were trained on the WikiBio training set. Training details and sample outputs are included in Appendices SECREF56 & SECREF57 . We divide these models into two categories and measure correlation separately for both the categories. The first category, WikiBio-Systems, includes one model each from the four families listed in Table TABREF32 . This category tests whether a metric can be used to compare different model families with a large variation in the quality of their outputs. The second category, WikiBio-Hyperparams, includes 13 different hyperparameter settings of PG-Net BIBREF19 , which was the best performing system overall. 9 of these were obtained by varying the beam size and length normalization penalty of the decoder network BIBREF23 , and the remaining 4 were obtained by re-scoring beams of size 8 with the information extraction model described in § SECREF4 . All the models in this category produce high quality fluent texts, and differ primarily on the quantity and accuracy of the information they express. Here we are testing whether a metric can be used to compare similar systems with a small variation in performance. This is an important use-case as metrics are often used to tune hyperparameters of a model. Human Evaluation We collected human judgments on the quality of the 16 models trained for WikiBio, plus the reference texts. Workers on a crowd-sourcing platform, proficient in English, were shown a table with pairs of generated texts, or a generated text and the reference, and asked to select the one they prefer. Figure FIGREF34 shows the instructions they were given. Paired comparisons have been shown to be superior to rating scales for comparing generated texts BIBREF24 . However, for measuring correlation the comparisons need to be aggregated into real-valued scores, INLINEFORM0 , for each of the INLINEFORM1 models. For this, we use Thurstone's method BIBREF22 , which assigns a score to each model based on how many times it was preferred over an alternative. The data collection was performed separately for models in the WikiBio-Systems and WikiBio-Hyperparams categories. 1100 tables were sampled from the development set, and for each table we got 8 different sentence pairs annotated across the two categories, resulting in a total of 8800 pairwise comparisons. Each pair was judged by one worker only which means there may be noise at the instance-level, but the aggregated system-level scores had low variance (cf. Table TABREF32 ). In total around 500 different workers were involved in the annotation. References were also included in the evaluation, and they received a lower score than PG-Net, highlighting the divergence in WikiBio. Compared Metrics Text only: We compare BLEU BIBREF8 , ROUGE BIBREF9 , METEOR BIBREF18 , CIDEr and CIDEr-D BIBREF25 using their publicly available implementations. Information Extraction based: We compare the CS, RG and RG-F metrics discussed in § SECREF4 . Text & Table: We compare a variant of BLEU, denoted as BLEU-T, where the values from the table are used as additional references. 
BLEU-T draws inspiration from iBLEU BIBREF26 but instead rewards n-grams which match the table rather than penalizing them. For PARENT, we compare both the word-overlap model (PARENT-W) and the co-occurrence model (PARENT-C) for determining entailment. We also compare versions where a single INLINEFORM0 is tuned on the entire dataset to maximize correlation with human judgments, denoted as PARENT*-W/C. Correlation Comparison We use bootstrap sampling (500 iterations) over the 1100 tables for which we collected human annotations to get an idea of how the correlation of each metric varies with the underlying data. In each iteration, we sample with replacement, tables along with their references and all the generated texts for that table. Then we compute aggregated human evaluation and metric scores for each of the models and compute the correlation between the two. We report the average correlation across all bootstrap samples for each metric in Table TABREF37 . The distribution of correlations for the best performing metrics are shown in Figure FIGREF38 . Table TABREF37 also indicates whether PARENT is significantly better than a baseline metric. BIBREF21 suggest using the William's test for this purpose, but since we are computing correlations between only 4/13 systems at a time, this test has very weak power in our case. Hence, we use the bootstrap samples to obtain a INLINEFORM0 confidence interval of the difference in correlation between PARENT and any other metric and check whether this is above 0 BIBREF27 . Correlations are higher for the systems category than the hyperparams category. The latter is a more difficult setting since very similar models are compared, and hence the variance of the correlations is also high. Commonly used metrics which only rely on the reference (BLEU, ROUGE, METEOR, CIDEr) have only weak correlations with human judgments. In the hyperparams category, these are often negative, implying that tuning models based on these may lead to selecting worse models. BLEU performs the best among these, and adding n-grams from the table as references improves this further (BLEU-T). Among the extractive evaluation metrics, CS, which also only relies on the reference, has poor correlation in the hyperparams category. RG-F, and both variants of the PARENT metric achieve the highest correlation for both settings. There is no significant difference among these for the hyperparams category, but for systems, PARENT-W is significantly better than the other two. While RG-F needs a full information extraction pipeline in its implementation, PARENT-C only relies on co-occurrence counts, and PARENT-W can be used out-of-the-box for any dataset. To our knowledge, this is the first rigorous evaluation of using information extraction for generation evaluation. On this dataset, the word-overlap model showed higher correlation than the co-occurrence model for entailment. In § SECREF47 we will show that for the WebNLG dataset, where more paraphrasing is involved between the table and text, the opposite is true. Lastly, we note that the heuristic for selecting INLINEFORM0 is sufficient to produce high correlations for PARENT, however, if human annotations are available, this can be tuned to produce significantly higher correlations (PARENT*-W/C). Analysis In this section we further analyze the performance of PARENT-W under different conditions, and compare to the other best metrics from Table TABREF37 . 
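Before that analysis, the bootstrap procedure used for the correlation comparison above can be made concrete with a short sketch. This is a minimal illustration rather than the released evaluation code: the two scoring callbacks, the 95% interval and the random seed are assumptions, while the sampling unit (tables together with their references and all generated texts), the 500 iterations and the Pearson correlation follow the description above.

    import numpy as np
    from scipy.stats import pearsonr

    def bootstrap_correlation(human_scores, metric_scores, n_tables, n_iter=500, seed=0):
        # human_scores / metric_scores are hypothetical callbacks mapping a list of
        # sampled table indices to one aggregated score per model (e.g. Thurstone
        # scores for humans, averaged metric values for the automatic metric).
        rng = np.random.default_rng(seed)
        correlations = []
        for _ in range(n_iter):
            sample = rng.integers(0, n_tables, size=n_tables)  # resample tables with replacement
            human = np.asarray(human_scores(sample))
            metric = np.asarray(metric_scores(sample))
            correlations.append(pearsonr(human, metric)[0])
        correlations = np.asarray(correlations)
        return correlations.mean(), np.percentile(correlations, [2.5, 97.5])

The same resampled correlations, differenced between two metrics, give the confidence interval used for the significance check mentioned above.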
To study the correlation as we vary the number of divergent references, we also collected binary labels from workers for whether a reference is entailed by the corresponding table. We define a reference as entailed when it mentions only information which can be inferred from the table. Each table and reference pair was judged by 3 independent workers, and we used the majority vote as the label for that pair. Overall, only INLINEFORM0 of the references were labeled as entailed by the table. Fleiss' INLINEFORM1 was INLINEFORM2 , which indicates a fair agreement. We found the workers sometimes disagreed on what information can be reasonably entailed by the table. Figure FIGREF40 shows the correlations as we vary the percent of entailed examples in the evaluation set of WikiBio. Each point is obtained by fixing the desired proportion of entailed examples, and sampling subsets from the full set which satisfy this proportion. PARENT and RG-F remain stable and show a high correlation across the entire range, whereas BLEU and BLEU-T vary a lot. In the hyperparams category, the latter two have the worst correlation when the evaluation set contains only entailed examples, which may seem surprising. However, on closer examination we found that this subset tends to omit a lot of information from the tables. Systems which produce more information than these references are penalized by BLEU, but not in the human evaluation. PARENT overcomes this issue by measuring recall against the table in addition to the reference. We check how different components in the computation of PARENT contribute to its correlation to human judgments. Specifically, we remove the probability INLINEFORM0 of an n-gram INLINEFORM1 being entailed by the table from Eqs. EQREF19 and EQREF23 . The average correlation for PARENT-W drops to INLINEFORM5 in this case. We also try a variant of PARENT with INLINEFORM6 , which removes the contribution of Table Recall (Eq. EQREF22 ). The average correlation is INLINEFORM7 in this case. With these components, the correlation is INLINEFORM8 , showing that they are crucial to the performance of PARENT. BIBREF28 point out that hill-climbing on an automatic metric is meaningless if that metric has a low instance-level correlation to human judgments. In Table TABREF46 we show the average accuracy of the metrics in making the same judgments as humans between pairs of generated texts. Both variants of PARENT are significantly better than the other metrics, however the best accuracy is only INLINEFORM0 for the binary task. This is a challenging task, since there are typically only subtle differences between the texts. Achieving higher instance-level accuracies will require more sophisticated language understanding models for evaluation. WebNLG Dataset To check how PARENT correlates with human judgments when the references are elicited from humans (and less likely to be divergent), we check its correlation with the human ratings provided for the systems competing in the WebNLG challenge BIBREF6 . The task is to generate text describing 1-5 RDF triples (e.g. John E Blaha, birthPlace, San Antonio), and human ratings were collected for the outputs of 9 participating systems on 223 instances. These systems include a mix of pipelined, statistical and neural methods. Each instance has upto 3 reference texts associated with the RDF triples, which we use for evaluation. 
The human ratings were collected on 3 distinct aspects – grammaticality, fluency and semantics, where semantics corresponds to the degree to which a generated text agrees with the meaning of the underlying RDF triples. We report the correlation of several metrics with these ratings in Table TABREF48 . Both variants of PARENT are either competitive or better than the other metrics in terms of the average correlation to all three aspects. This shows that PARENT is applicable for high quality references as well. While BLEU has the highest correlation for the grammar and fluency aspects, PARENT does best for semantics. This suggests that the inclusion of source tables into the evaluation orients the metric more towards measuring the fidelity of the content of the generation. A similar trend is seen comparing BLEU and BLEU-T. As modern neural text generation systems are typically very fluent, measuring their fidelity is of increasing importance. Between the two entailment models, PARENT-C is better due to its higher correlation with the grammaticality and fluency aspects. The INLINEFORM0 parameter in the calculation of PARENT decides whether to compute recall against the table or the reference (Eq. EQREF22 ). Figure FIGREF50 shows the distribution of the values taken by INLINEFORM1 using the heuristic described in § SECREF3 for instances in both WikiBio and WebNLG. For WikiBio, the recall of the references against the table is generally low, and hence the recall of the generated text relies more on the table. For WebNLG, where the references are elicited from humans, this recall is much higher (often INLINEFORM2 ), and hence the recall of the generated text relies more on the reference. Related Work Over the years several studies have evaluated automatic metrics for measuring text generation performance BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 , BIBREF35 . The only consensus from these studies seems to be that no single metric is suitable across all tasks. A recurring theme is that metrics like BLEU and NIST BIBREF36 are not suitable for judging content quality in NLG. Recently, BIBREF37 did a comprehensive study of several metrics on the outputs of state-of-the-art NLG systems, and found that while they showed acceptable correlation with human judgments at the system level, they failed to show any correlation at the sentence level. Ours is the first study which checks the quality of metrics when table-to-text references are divergent. We show that in this case even system level correlations can be unreliable. Hallucination BIBREF38 , BIBREF39 refers to when an NLG system generates text which mentions extra information than what is present in the source from which it is generated. Divergence can be viewed as hallucination in the reference text itself. PARENT deals with hallucination by discounting n-grams which do not overlap with either the reference or the table. PARENT draws inspiration from iBLEU BIBREF26 , a metric for evaluating paraphrase generation, which compares the generated text to both the source text and the reference. While iBLEU penalizes texts which match the source, here we reward such texts since our task values accuracy of generated text more than the need for paraphrasing the tabular content BIBREF40 . Similar to SARI for text simplification BIBREF41 and Q-BLEU for question generation BIBREF42 , PARENT falls under the category of task-specific metrics. 
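To make the role of the lambda parameter discussed above more concrete, the sketch below combines recall against the reference with recall against the table, setting lambda from how much of the table the reference itself covers. This is only an illustrative reading of the description: the n-gram order, the treatment of table values as a flat token list and the geometric combination are assumptions, and the paper's Eq. EQREF22 and entailment weighting are not reproduced here.

    from collections import Counter

    def ngrams(tokens, n):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def coverage(candidate, target, n=1):
        # Fraction of the target's n-grams that also occur in the candidate
        # (counts clipped), i.e. how much of `target` the candidate covers.
        cand, tgt = Counter(ngrams(candidate, n)), Counter(ngrams(target, n))
        if not tgt:
            return 1.0
        return sum(min(cand[g], c) for g, c in tgt.items()) / sum(tgt.values())

    def combined_recall(generation, reference, table_tokens, n=1):
        lam = coverage(reference, table_tokens, n)      # recall of the reference against the table
        r_ref = coverage(generation, reference, n)      # recall against the reference
        r_tab = coverage(generation, table_tokens, n)   # recall against the table values
        # The better the reference covers the table, the more weight it receives.
        return (r_ref ** lam) * (r_tab ** (1.0 - lam))

Under this heuristic, WikiBio-style references (low table coverage) push the recall of the generated text towards the table, while WebNLG-style references (coverage close to 1) push it towards the reference, matching the behaviour shown in Figure FIGREF50.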
Conclusions We study the automatic evaluation of table-to-text systems when the references diverge from the table. We propose a new metric, PARENT, which shows the highest correlation with humans across a range of settings with divergent references in WikiBio. We also perform the first empirical evaluation of information extraction based metrics BIBREF1 , and find RG-F to be effective. Lastly, we show that PARENT is comparable to the best existing metrics when references are elicited by humans on the WebNLG data. Acknowledgements Bhuwan Dhingra is supported by a fellowship from Siemens, and by grants from Google. We thank Maruan Al-Shedivat, Ian Tenney, Tom Kwiatkowski, Michael Collins, Slav Petrov, Jason Baldridge, David Reitter and other members of the Google AI Language team for helpful discussions and suggestions. We thank Sam Wiseman for sharing data for an earlier version of this paper. We also thank the anonymous reviewers for their feedback. Information Extraction System For evaluation via information extraction BIBREF1 we train a model for WikiBio which accepts text as input and generates a table as the output. Tables in WikiBio are open-domain, without any fixed schema for which attributes may be present or absent in an instance. Hence we employ the Pointer-Generator Network (PG-Net) BIBREF19 for this purpose. Specifically, we use a sequence-to-sequence model, whose encoder and decoder are both single-layer bi-directional LSTMs. The decoder is augmented with an attention mechanism over the states of the encoder. Further, it also uses a copy mechanism to optionally copy tokens directly from the source text. We do not use the coverage mechanism of BIBREF19 since that is specific to the task of summarization they study. The decoder is trained to produce a linearized version of the table where the rows and columns are flattened into a sequence, and separate by special tokens. Figure FIGREF53 shows an example. Clearly, since the references are divergent, the model cannot be expected to produce the entire table, and we see some false information being hallucinated after training. Nevertheless, as we show in § SECREF36 , this system can be used for evaluating generated texts. After training, we can parse the output sequence along the special tokens INLINEFORM0 R INLINEFORM1 and INLINEFORM2 C INLINEFORM3 to get a set of (attribute, value) pairs. Table TABREF54 shows the precision, recall and F-score of these extracted pairs against the ground truth tables, where the attributes and values are compared using an exact string match. Hyperparameters After tuning we found the same set of hyperparameters to work well for both the table-to-text PG-Net, and the inverse information extraction PG-Net. The hidden state size of the biLSTMs was set to 200. The input and output vocabularies were set to 50000 most common words in the corpus, with additional special symbols for table attribute names (such as “birth-date”). The embeddings of the tokens in the vocabulary were initialized with Glove BIBREF43 . Learning rate of INLINEFORM0 was used during training, with the Adam optimizer, and a dropout of INLINEFORM1 was also applied to the outputs of the biLSTM. Models were trained till the loss on the dev set stopped dropping. Maximum length of a decoded text was set to 40 tokens, and that of the tables was set to 120 tokens. Various beam sizes and length normalization penalties were applied for the table-to-text system, which are listed in the main paper. 
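As a concrete illustration of the parsing and scoring step described above, the sketch below splits a linearized output along row and column separators and compares the resulting (attribute, value) pairs to the ground-truth table using exact string match. The separator strings and helper names are assumptions; the passage only states that special row and column tokens are used.

    def parse_linearized_table(output, row_sep="<R>", col_sep="<C>"):
        # e.g. "name <C> john e blaha <R> birth-place <C> san antonio <R>"
        pairs = set()
        for row in output.split(row_sep):
            if col_sep not in row:
                continue
            attribute, value = row.split(col_sep, 1)
            if attribute.strip() and value.strip():
                pairs.add((attribute.strip(), value.strip()))
        return pairs

    def pair_prf(predicted, gold):
        # Precision / recall / F-score of extracted pairs against the ground-truth table.
        tp = len(predicted & gold)
        p = tp / len(predicted) if predicted else 0.0
        r = tp / len(gold) if gold else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f
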
For the information extraction system, we found a beam size of 8 and no length penalty to produce the highest F-score on the dev set. Sample Outputs Table TABREF55 shows some sample references and the corresponding predictions from the best performing model, PG-Net for WikiBio. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: N-grams of which length are aligned using PARENT? Only give me the answer and do not output any other words.
Answer:
[ "Unanswerable", "Answer with content missing: (Parent subsections) combine precisions for n-gram orders 1-4" ]
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Named Entity Recognition (NER) is a foremost NLP task to label each atomic elements of a sentence into specific categories like "PERSON", "LOCATION", "ORGANIZATION" and othersBIBREF0. There has been an extensive NER research on English, German, Dutch and Spanish language BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, and notable research on low resource South Asian languages like HindiBIBREF6, IndonesianBIBREF7 and other Indian languages (Kannada, Malayalam, Tamil and Telugu)BIBREF8. However, there has been no study on developing neural NER for Nepali language. In this paper, we propose a neural based Nepali NER using latest state-of-the-art architecture based on grapheme-level which doesn't require any hand-crafted features and no data pre-processing. Recent neural architecture like BIBREF1 is used to relax the need to hand-craft the features and need to use part-of-speech tag to determine the category of the entity. However, this architecture have been studied for languages like English, and German and not been applied to languages like Nepali which is a low resource language i.e limited data set to train the model. Traditional methods like Hidden Markov Model (HMM) with rule based approachesBIBREF9,BIBREF10, and Support Vector Machine (SVM) with manual feature-engineeringBIBREF11 have been applied but they perform poor compared to neural. However, there has been no research in Nepali NER using neural network. Therefore, we created the named entity annotated dataset partly with the help of Dataturk to train a neural model. The texts used for this dataset are collected from various daily news sources from Nepal around the year 2015-2016. Following are our contributions: We present a novel Named Entity Recognizer (NER) for Nepali language. To best of our knowledge we are the first to propose neural based Nepali NER. As there are not good quality dataset to train NER we release a dataset to support future research We perform empirical evaluation of our model with state-of-the-art models with relative improvement of upto 10% In this paper, we present works similar to ours in Section SECREF2. We describe our approach and dataset statistics in Section SECREF3 and SECREF4, followed by our experiments, evaluation and discussion in Section SECREF5, SECREF6, and SECREF7. We conclude with our observations in Section SECREF8. To facilitate further research our code and dataset will be made available at github.com/link-yet-to-be-updated Related Work There has been a handful of research on Nepali NER task based on approaches like Support Vector Machine and gazetteer listBIBREF11 and Hidden Markov Model and gazetteer listBIBREF9,BIBREF10. BIBREF11 uses SVM along with features like first word, word length, digit features and gazetteer (person, organization, location, middle name, verb, designation and others). It uses one vs rest classification model to classify each word into different entity classes. However, it does not the take context word into account while training the model. Similarly, BIBREF9 and BIBREF10 uses Hidden Markov Model with n-gram technique for extracting POS-tags. POS-tags with common noun, proper noun or combination of both are combined together, then uses gazetteer list as look-up table to identify the named entities. 
Researchers have shown that the neural networks like CNNBIBREF12, RNNBIBREF13, LSTMBIBREF14, GRUBIBREF15 can capture the semantic knowledge of language better with the help of pre-trained embbeddings like word2vecBIBREF16, gloveBIBREF17 or fasttextBIBREF18. Similar approaches has been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7, BengaliBIBREF19 and In this paper, we present the neural network architecture for NER task in Nepali language, which doesn't require any manual feature engineering nor any data pre-processing during training. First we are comparing BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1, BiLSTM+CNN+CRFBIBREF2 models with CNN modelBIBREF0 and Stanford CRF modelBIBREF21. Secondly, we show the comparison between models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech(POS) one-hot encoding and word embedding + grapheme clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from ILPRL lab. Our extensive study shows that augmenting word embedding with character or grapheme-level representation and POS one-hot encoding vector yields better results compared to using general word embedding alone. Approach In this section, we describe our approach in building our model. This model is partly inspired from multiple models BIBREF20,BIBREF1, andBIBREF2 Approach ::: Bidirectional LSTM We used Bi-directional LSTM to capture the word representation in forward as well as reverse direction of a sentence. Generally, LSTMs take inputs from left (past) of the sentence and computes the hidden state. However, it is proven beneficialBIBREF23 to use bi-directional LSTM, where, hidden states are computed based from right (future) of sentence and both of these hidden states are concatenated to produce the final output as $h_t$=[$\overrightarrow{h_t}$;$\overleftarrow{h_t}$], where $\overrightarrow{h_t}$, $\overleftarrow{h_t}$ = hidden state computed in forward and backward direction respectively. Approach ::: Features ::: Word embeddings We have used Word2Vec BIBREF16, GloVe BIBREF17 and FastText BIBREF18 word vectors of 300 dimensions. These vectors were trained on the corpus obtained from Nepali National Corpus. This pre-lemmatized corpus consists of 14 million words from books, web-texts and news papers. This corpus was mixed with the texts from the dataset before training CBOW and skip-gram version of word2vec using gensim libraryBIBREF24. This trained model consists of vectors for 72782 unique words. Light pre-processing was performed on the corpus before training it. For example, invalid characters or characters other than Devanagari were removed but punctuation and numbers were not removed. We set the window context at 10 and the rare words whose count is below 5 are dropped. These word embeddings were not frozen during the training session because fine-tuning word embedding help achieve better performance compared to frozen oneBIBREF20. We have used fasttext embeddings in particular because of its sub-word representation ability, which is very useful in highly inflectional language as shown in Table TABREF25. We have trained the word embedding in such a way that the sub-word size remains between 1 and 4. 
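A minimal sketch of this embedding setup is given below; the corpus variable is a placeholder for the tokenized Nepali National Corpus mixed with the dataset text, and the parameter names follow recent gensim releases (older versions use size instead of vector_size). The reason for restricting sub-words to sizes 1 to 4 is explained next.

    from gensim.models import FastText

    corpus = [["..."]]  # placeholder: iterable of tokenized Devanagari sentences

    model = FastText(
        sentences=corpus,
        vector_size=300,    # 300-dimensional vectors
        window=10,          # context window of 10
        min_count=5,        # drop words occurring fewer than 5 times
        sg=1,               # skip-gram variant
        min_n=1, max_n=4,   # character n-gram (sub-word) sizes between 1 and 4
    )
    model.save("fasttext_nepali.model")
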
We particularly chose this size because in Nepali language a single letter can also be a word, for example e, t, C, r, l, n, u and a single character (grapheme) or sub-word can be formed after mixture of dependent vowel signs with consonant letters for example, C + O + = CO, here three different consonant letters form a single sub-word. The two-dimensional visualization of an example word npAl is shown in FIGREF14. Principal Component Analysis (PCA) technique was used to generate this visualization which helps use to analyze the nearest neighbor words of a given sample word. 84 and 104 nearest neighbors were observed using word2vec and fasttext embedding respectively on the same corpus. Approach ::: Features ::: Character-level embeddings BIBREF20 and BIBREF2 successfully presented that the character-level embeddings, extracted using CNN, when combined with word embeddings enhances the NER model performance significantly, as it is able to capture morphological features of a word. Figure FIGREF7 shows the grapheme-level CNN used in our model, where inputs to CNN are graphemes. Character-level CNN is also built in similar fashion, except the inputs are characters. Grapheme or Character -level embeddings are randomly initialized from [0,1] with real values with uniform distribution of dimension 30. Approach ::: Features ::: Grapheme-level embeddings Grapheme is atomic meaningful unit in writing system of any languages. Since, Nepali language is highly morphologically inflectional, we compared grapheme-level representation with character-level representation to evaluate its effect. For example, in character-level embedding, each character of a word npAl results into n + + p + A + l has its own embedding. However, in grapheme level, a word npAl is clustered into graphemes, resulting into n + pA + l. Here, each grapheme has its own embedding. This grapheme-level embedding results good scores on par with character-level embedding in highly inflectional languages like Nepali, because graphemes also capture syntactic information similar to characters. We created grapheme clusters using uniseg package which is helpful in unicode text segmentations. Approach ::: Features ::: Part-of-speech (POS) one hot encoding We created one-hot encoded vector of POS tags and then concatenated with pre-trained word embeddings before passing it to BiLSTM network. A sample of data is shown in figure FIGREF13. Dataset Statistics ::: OurNepali dataset Since, we there was no publicly available standard Nepali NER dataset and did not receive any dataset from the previous researchers, we had to create our own dataset. This dataset contains the sentences collected from daily newspaper of the year 2015-2016. This dataset has three major classes Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset, for example all punctuations and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in standard CoNLL-2003 IO formatBIBREF25. Since, this dataset is not lemmatized originally, we lemmatized only the post-positions like Ek, kO, l, mA, m, my, jF, sg, aEG which are just the few examples among 299 post positions in Nepali language. We obtained these post-positions from sanjaalcorps and added few more to match our dataset. We will be releasing this list in our github repository. We found out that lemmatizing the post-positions boosted the F1 score by almost 10%. 
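Returning briefly to the grapheme clustering mentioned above, a minimal sketch using the uniseg package is shown below. The module path follows the package's documented layout, and the expected output assumes Unicode extended grapheme clustering.

    from uniseg.graphemecluster import grapheme_clusters

    def to_graphemes(word):
        # Split a Devanagari word into grapheme clusters, so that a consonant
        # together with its dependent vowel sign stays a single unit.
        return list(grapheme_clusters(word))

    # The word for Nepal, written as five characters, gives three grapheme
    # clusters, matching the n + pA + l example above.
    print(to_graphemes("नेपाल"))  # expected: ['ने', 'पा', 'ल']
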
In order to label our dataset with POS-tags, we first created POS annotated dataset of 6946 sentences and 16225 unique words extracted from POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy which was used to create POS-tags for our dataset. The dataset released in our github repository contains each word in newline with space separated POS-tags and Entity-tags. The sentences are separated by empty newline. A sample sentence from the dataset is presented in table FIGREF13. Dataset Statistics ::: ILPRL dataset After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset is prepared by ILPRL Lab, KU and KEIV Technologies. Few corrections like correcting the NER tags had to be made on the dataset. The statistics of both the dataset is presented in table TABREF23. Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the dataset used in our experiments. The dataset is divided into three parts with 64%, 16% and 20% of the total dataset into training set, development set and test set respectively. Experiments In this section, we present the details about training our neural network. The neural network architecture are implemented using PyTorch framework BIBREF26. The training is performed on a single Nvidia Tesla P100 SXM2. We first run our experiment on BiLSTM, BiLSTM-CNN, BiLSTM-CRF BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. The training and evaluation was done on sentence-level. The RNN variants are initialized randomly from $(-\sqrt{k},\sqrt{k})$ where $k=\frac{1}{hidden\_size}$. First we loaded our dataset and built vocabulary using torchtext library. This eased our process of data loading using its SequenceTaggingDataset class. We trained our model with shuffled training set using Adam optimizer with hyper-parameters mentioned in table TABREF30. All our models were trained on single layer of LSTM network. We found out Adam was giving better performance and faster convergence compared to Stochastic Gradient Descent (SGD). We chose those hyper-parameters after many ablation studies. The dropout of 0.5 is applied after LSTM layer. For CNN, we used 30 different filters of sizes 3, 4 and 5. The embeddings of each character or grapheme involved in a given word, were passed through the pipeline of Convolution, Rectified Linear Unit and Max-Pooling. The resulting vectors were concatenated and applied dropout of 0.5 before passing into linear layer to obtain the embedding size of 30 for the given word. This resulting embedding is concatenated with word embeddings, which is again concatenated with one-hot POS vector. Experiments ::: Tagging Scheme Currently, for our experiments we trained our model on IO (Inside, Outside) format for both the dataset, hence the dataset does not contain any B-type annotation unlike in BIO (Beginning, Inside, Outside) scheme. Experiments ::: Early Stopping We used simple early stopping technique where if the validation loss does not decrease after 10 epochs, the training was stopped, else the training will run upto 100 epochs. In our experience, training usually stops around 30-50 epochs. 
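A sketch of the character/grapheme CNN described above is given below in PyTorch, the framework used in the experiments: 30 filters for each of the sizes 3, 4 and 5, ReLU, max-pooling, concatenation, dropout of 0.5 and a linear layer producing a 30-dimensional word representation. The padding choice and the module name are assumptions.

    import torch
    import torch.nn as nn

    class CharCNN(nn.Module):
        def __init__(self, vocab_size, char_dim=30, num_filters=30,
                     kernel_sizes=(3, 4, 5), out_dim=30, dropout=0.5):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, char_dim, padding_idx=0)
            self.convs = nn.ModuleList(
                nn.Conv1d(char_dim, num_filters, k, padding=k - 1) for k in kernel_sizes
            )
            self.dropout = nn.Dropout(dropout)
            self.proj = nn.Linear(num_filters * len(kernel_sizes), out_dim)

        def forward(self, char_ids):
            # char_ids: (words, max_chars_per_word) indices of characters or graphemes
            x = self.embed(char_ids).transpose(1, 2)            # -> (words, char_dim, chars)
            pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
            features = self.dropout(torch.cat(pooled, dim=1))   # concatenate pooled filters
            return self.proj(features)                          # 30-dim representation

The resulting vector is concatenated with the pre-trained word embedding (and, where used, the one-hot POS vector) before the BiLSTM layer.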
Experiments ::: Hyper-parameters Tuning We ran our experiment looking for the best hyper-parameters by changing learning rate from (0,1, 0.01, 0.001, 0.0001), weight decay from [$10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$, $10^{-7}$], batch size from [1, 2, 4, 8, 16, 32, 64, 128], hidden size from [8, 16, 32, 64, 128, 256, 512 1024]. Table TABREF30 shows all other hyper-parameter used in our experiment for both of the dataset. Experiments ::: Effect of Dropout Figure FIGREF31 shows how we end up choosing 0.5 as dropout rate. When the dropout layer was not used, the F1 score are at the lowest. As, we slowly increase the dropout rate, the F1 score also gradually increases, however after dropout rate = 0.5, the F1 score starts falling down. Therefore, we have chosen 0.5 as dropout rate for all other experiments performed. Evaluation In this section, we present the details regarding evaluation and comparison of our models with other baselines. Table TABREF25 shows the study of various embeddings and comparison among each other in OurNepali dataset. Here, raw dataset represents such dataset where post-positions are not lemmatized. We can observe that pre-trained embeddings significantly improves the score compared to randomly initialized embedding. We can deduce that Skip Gram models perform better compared CBOW models for word2vec and fasttext. Here, fastText_Pretrained represents the embedding readily available in fastText website, while other embeddings are trained on the Nepali National Corpus as mentioned in sub-section SECREF11. From this table TABREF25, we can clearly observe that model using fastText_Skip Gram embeddings outperforms all other models. Table TABREF35 shows the model architecture comparison between all the models experimented. The features used for Stanford CRF classifier are words, letter n-grams of upto length 6, previous word and next word. This model is trained till the current function value is less than $1\mathrm {e}{-2}$. The hyper-parameters of neural network experiments are set as shown in table TABREF30. Since, word embedding of character-level and grapheme-level is random, their scores are near. All models are evaluated using CoNLL-2003 evaluation scriptBIBREF25 to calculate entity-wise precision, recall and f1 score. Discussion In this paper we present that we can exploit the power of neural network to train the model to perform downstream NLP tasks like Named Entity Recognition even in Nepali language. We showed that the word vectors learned through fasttext skip gram model performs better than other word embedding because of its capability to represent sub-word and this is particularly important to capture morphological structure of words and sentences in highly inflectional like Nepali. This concept can come handy in other Devanagari languages as well because the written scripts have similar syntactical structure. We also found out that stemming post-positions can help a lot in improving model performance because of inflectional characteristics of Nepali language. So when we separate out its inflections or morphemes, we can minimize the variations of same word which gives its root word a stronger word vector representations compared to its inflected versions. We can clearly imply from tables TABREF23, TABREF24, and TABREF35 that we need more data to get better results because OurNepali dataset volume is almost ten times bigger compared to ILPRL dataset in terms of entities. 
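The search over these values can be sketched as an exhaustive grid, as below. Whether the original tuning swept the grid jointly or one parameter at a time is not stated, and train_and_evaluate is a placeholder for training the model with a given setting and returning its development F1; the "0,1" and "512 1024" in the lists above are read as 0.1 and as the two values 512 and 1024.

    from itertools import product

    learning_rates = [0.1, 0.01, 0.001, 0.0001]
    weight_decays = [10 ** -k for k in range(1, 8)]   # 1e-1 ... 1e-7
    batch_sizes = [1, 2, 4, 8, 16, 32, 64, 128]
    hidden_sizes = [8, 16, 32, 64, 128, 256, 512, 1024]

    def grid_search(train_and_evaluate):
        best_f1, best_setting = -1.0, None
        for lr, wd, bs, hs in product(learning_rates, weight_decays, batch_sizes, hidden_sizes):
            f1 = train_and_evaluate(lr=lr, weight_decay=wd, batch_size=bs, hidden_size=hs)
            if f1 > best_f1:
                best_f1, best_setting = f1, (lr, wd, bs, hs)
        return best_setting, best_f1
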
Conclusion and Future work In this paper, we proposed a novel NER for Nepali language and achieved relative improvement of upto 10% and studies different factors effecting the performance of the NER for Nepali language. We also present a neural architecture BiLSTM+CNN(grapheme-level) which turns out to be performing on par with BiLSTM+CNN(character-level) under the same configuration. We believe this will not only help Nepali language but also other languages falling under the umbrellas of Devanagari languages. Our model BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperforms all other model experimented in OurNepali and ILPRL dataset respectively. Since this is the first named entity recognition research in Nepal language using neural network, there are many rooms for improvement. We believe initializing the grapheme-level embedding with fasttext embeddings might help boosting the performance, rather than randomly initializing it. In future, we plan to apply other latest techniques like BERT, ELMo and FLAIR to study its effect on low-resource language like Nepali. We also plan to improve the model using cross-lingual or multi-lingual parameter sharing techniques by jointly training with other Devanagari languages like Hindi and Bengali. Finally, we would like to contribute our dataset to Nepali NLP community to move forward the research going on in language understanding domain. We believe there should be special committee to create and maintain such dataset for Nepali NLP and organize various competitions which would elevate the NLP research in Nepal. Some of the future works are listed below: Proper initialization of grapheme level embedding from fasttext embeddings. Apply robust POS-tagger for Nepali dataset Lemmatize the OurNepali dataset with robust and efficient lemmatizer Improve Nepali language score with cross-lingual learning techniques Create more dataset using Wikipedia/Wikidata framework Acknowledgments The authors of this paper would like to express sincere thanks to Bal Krishna Bal, Kathmandu University Professor for providing us the POS-tagged Nepali NER data. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How many sentences does the dataset contain? Only give me the answer and do not output any other words.
Answer:
[ "3606", "6946" ]
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Automatic classification of sentiment has mainly focused on categorizing tweets in either two (binary sentiment analysis) or three (ternary sentiment analysis) categories BIBREF0 . In this work we study the problem of fine-grained sentiment classification where tweets are classified according to a five-point scale ranging from VeryNegative to VeryPositive. To illustrate this, Table TABREF3 presents examples of tweets associated with each of these categories. Five-point scales are widely adopted in review sites like Amazon and TripAdvisor, where a user's sentiment is ordered with respect to its intensity. From a sentiment analysis perspective, this defines a classification problem with five categories. In particular, Sebastiani et al. BIBREF1 defined such classification problems whose categories are explicitly ordered to be ordinal classification problems. To account for the ordering of the categories, learners are penalized according to how far from the true class their predictions are. Although considering different scales, the various settings of sentiment classification are related. First, one may use the same feature extraction and engineering approaches to represent the text spans such as word membership in lexicons, morpho-syntactic statistics like punctuation or elongated word counts BIBREF2 , BIBREF3 . Second, one would expect that knowledge from one task can be transfered to the others and this would benefit the performance. Knowing that a tweet is “Positive” in the ternary setting narrows the classification decision between the VeryPositive and Positive categories in the fine-grained setting. From a research perspective this raises the question of whether and how one may benefit when tackling such related tasks and how one can transfer knowledge from one task to another during the training phase. Our focus in this work is to exploit the relation between the sentiment classification settings and demonstrate the benefits stemming from combining them. To this end, we propose to formulate the different classification problems as a multitask learning problem and jointly learn them. Multitask learning BIBREF4 has shown great potential in various domains and its benefits have been empirically validated BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 using different types of data and learning approaches. An important benefit of multitask learning is that it provides an elegant way to access resources developed for similar tasks. By jointly learning correlated tasks, the amount of usable data increases. For instance, while for ternary classification one can label data using distant supervision with emoticons BIBREF9 , there is no straightforward way to do so for the fine-grained problem. However, the latter can benefit indirectly, if the ternary and fine-grained tasks are learned jointly. The research question that the paper attempts to answer is the following: Can twitter sentiment classification problems, and fine-grained sentiment classification in particular, benefit from multitask learning? 
To answer the question, the paper brings the following two main contributions: (i) we show how jointly learning the ternary and fine-grained sentiment classification problems in a multitask setting improves the state-of-the-art performance, and (ii) we demonstrate that recurrent neural networks outperform models previously proposed without access to huge corpora while being flexible to incorporate different sources of data. Multitask Learning for Twitter Sentiment Classification In his work, Caruana BIBREF4 proposed a multitask approach in which a learner takes advantage of the multiplicity of interdependent tasks while jointly learning them. The intuition is that if the tasks are correlated, the learner can learn a model jointly for them while taking into account the shared information which is expected to improve its generalization ability. People express their opinions online on various subjects (events, products..), on several languages and in several styles (tweets, paragraph-sized reviews..), and it is exactly this variety that motivates the multitask approaches. Specifically for Twitter for instance, the different settings of classification like binary, ternary and fine-grained are correlated since their difference lies in the sentiment granularity of the classes which increases while moving from binary to fine-grained problems. There are two main decisions to be made in our approach: the learning algorithm, which learns a decision function, and the data representation. With respect to the former, neural networks are particularly suitable as one can design architectures with different properties and arbitrary complexity. Also, as training neural network usually relies on back-propagation of errors, one can have shared parts of the network trained by estimating errors on the joint tasks and others specialized for particular tasks. Concerning the data representation, it strongly depends on the data type available. For the task of sentiment classification of tweets with neural networks, distributed embeddings of words have shown great potential. Embeddings are defined as low-dimensional, dense representations of words that can be obtained in an unsupervised fashion by training on large quantities of text BIBREF10 . Concerning the neural network architecture, we focus on Recurrent Neural Networks (RNNs) that are capable of modeling short-range and long-range dependencies like those exhibited in sequence data of arbitrary length like text. While in the traditional information retrieval paradigm such dependencies are captured using INLINEFORM0 -grams and skip-grams, RNNs learn to capture them automatically BIBREF11 . To circumvent the problems with capturing long-range dependencies and preventing gradients from vanishing, the long short-term memory network (LSTM) was proposed BIBREF12 . In this work, we use an extended version of LSTM called bidirectional LSTM (biLSTM). While standard LSTMs access information only from the past (previous words), biLSTMs capture both past and future information effectively BIBREF13 , BIBREF11 . They consist of two LSTM networks, for propagating text forward and backwards with the goal being to capture the dependencies better. Indeed, previous work on multitask learning showed the effectiveness of biLSTMs in a variety of problems: BIBREF14 tackled sequence prediction, while BIBREF6 and BIBREF15 used biLSTMs for Named Entity Recognition and dependency parsing respectively. Figure FIGREF2 presents the architecture we use for multitask learning. 
In the top-left of the figure a biLSTM network (enclosed by the dashed line) is fed with embeddings INLINEFORM0 that correspond to the INLINEFORM1 words of a tokenized tweet. Notice, as discussed above, the biLSTM consists of two LSTMs that are fed with the word sequence forward and backwards. On top of the biLSTM network one (or more) hidden layers INLINEFORM2 transform its output. The output of INLINEFORM3 is led to the softmax layers for the prediction step. There are INLINEFORM4 softmax layers and each is used for one of the INLINEFORM5 tasks of the multitask setting. In tasks such as sentiment classification, additional features like membership of words in sentiment lexicons or counts of elongated/capitalized words can be used to enrich the representation of tweets before the classification step BIBREF3 . The lower part of the network illustrates how such sources of information can be incorporated to the process. A vector “Additional Features” for each tweet is transformed from the hidden layer(s) INLINEFORM6 and then is combined by concatenation with the transformed biLSTM output in the INLINEFORM7 layer. Experimental setup Our goal is to demonstrate how multitask learning can be successfully applied on the task of sentiment classification of tweets. The particularities of tweets are to be short and informal text spans. The common use of abbreviations, creative language etc., makes the sentiment classification problem challenging. To validate our hypothesis, that learning the tasks jointly can benefit the performance, we propose an experimental setting where there are data from two different twitter sentiment classification problems: a fine-grained and a ternary. We consider the fine-grained task to be our primary task as it is more challenging and obtaining bigger datasets, e.g. by distant supervision, is not straightforward and, hence we report the performance achieved for this task. Ternary and fine-grained sentiment classification were part of the SemEval-2016 “Sentiment Analysis in Twitter” task BIBREF16 . We use the high-quality datasets the challenge organizers released. The dataset for fine-grained classification is split in training, development, development_test and test parts. In the rest, we refer to these splits as train, development and test, where train is composed by the training and the development instances. Table TABREF7 presents an overview of the data. As discussed in BIBREF16 and illustrated in the Table, the fine-grained dataset is highly unbalanced and skewed towards the positive sentiment: only INLINEFORM0 of the training examples are labeled with one of the negative classes. Feature representation We report results using two different feature sets. The first one, dubbed nbow, is a neural bag-of-words that uses text embeddings to generate low-dimensional, dense representations of the tweets. To construct the nbow representation, given the word embeddings dictionary where each word is associated with a vector, we apply the average compositional function that averages the embeddings of the words that compose a tweet. Simple compositional functions like average were shown to be robust and efficient in previous work BIBREF17 . Instead of training embeddings from scratch, we use the pre-trained on tweets GloVe embeddings of BIBREF10 . In terms of resources required, using only nbow is efficient as it does not require any domain knowledge. 
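A minimal sketch of the nbow representation follows; the embeddings argument is assumed to be a token-to-vector dictionary loaded from the pre-trained tweet GloVe vectors, and out-of-vocabulary tokens are simply skipped (how the original work handled them is not stated).

    import numpy as np

    def nbow(tweet_tokens, embeddings, dim=50):
        # Average the pre-trained GloVe vectors of the tokens composing a tweet.
        vectors = [embeddings[t] for t in tweet_tokens if t in embeddings]
        if not vectors:
            return np.zeros(dim)
        return np.mean(vectors, axis=0)

The nbow+ variant augments this representation with the hand-crafted features described below.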
However, previous research on sentiment analysis showed that using extra resources, like sentiment lexicons, can benefit significantly the performance BIBREF3 , BIBREF2 . To validate this and examine at which extent neural networks and multitask learning benefit from such features we evaluate the models using an augmented version of nbow, dubbed nbow+. The feature space of the latter, is augmented using 1,368 extra features consisting mostly of counts of punctuation symbols ('!?#@'), emoticons, elongated words and word membership features in several sentiment lexicons. Due to space limitations, for a complete presentation of these features, we refer the interested reader to BIBREF2 , whose open implementation we used to extract them. Evaluation measure To reproduce the setting of the SemEval challenges BIBREF16 , we optimize our systems using as primary measure the macro-averaged Mean Absolute Error ( INLINEFORM0 ) given by: INLINEFORM1 where INLINEFORM0 is the number of categories, INLINEFORM1 is the set of instances whose true class is INLINEFORM2 , INLINEFORM3 is the true label of the instance INLINEFORM4 and INLINEFORM5 the predicted label. The measure penalizes decisions far from the true ones and is macro-averaged to account for the fact that the data are unbalanced. Complementary to INLINEFORM6 , we report the performance achieved on the micro-averaged INLINEFORM7 measure, which is a commonly used measure for classification. The models To evaluate the multitask learning approach, we compared it with several other models. Support Vector Machines (SVMs) are maximum margin classification algorithms that have been shown to achieve competitive performance in several text classification problems BIBREF16 . SVM INLINEFORM0 stands for an SVM with linear kernel and an one-vs-rest approach for the multi-class problem. Also, SVM INLINEFORM1 is an SVM with linear kernel that employs the crammer-singer strategy BIBREF18 for the multi-class problem. Logistic regression (LR) is another type of linear classification method, with probabilistic motivation. Again, we use two types of Logistic Regression depending on the multi-class strategy: LR INLINEFORM2 that uses an one-vs-rest approach and multinomial Logistic Regression also known as the MaxEnt classifier that uses a multinomial criterion. Both SVMs and LRs as discussed above treat the problem as a multi-class one, without considering the ordering of the classes. For these four models, we tuned the hyper-parameter INLINEFORM0 that controls the importance of the L INLINEFORM1 regularization part in the optimization problem with grid-search over INLINEFORM2 using 10-fold cross-validation in the union of the training and development data and then retrained the models with the selected values. Also, to account for the unbalanced classification problem we used class weights to penalize more the errors made on the rare classes. These weights were inversely proportional to the frequency of each class. For the four models we used the implementations of Scikit-learn BIBREF19 . For multitask learning we use the architecture shown in Figure FIGREF2 , which we implemented with Keras BIBREF20 . The embeddings are initialized with the 50-dimensional GloVe embeddings while the output of the biLSTM network is set to dimension 50. The activation function of the hidden layers is the hyperbolic tangent. The weights of the layers were initialized from a uniform distribution, scaled as described in BIBREF21 . 
We used the Root Mean Square Propagation optimization method. We used dropout for regularizing the network. We trained the network using batches of 128 examples as follows: before selecting the batch, we perform a Bernoulli trial with probability INLINEFORM0 to select the task to train for. With probability INLINEFORM1 we pick a batch for the fine-grained sentiment classification problem, while with probability INLINEFORM2 we pick a batch for the ternary problem. As shown in Figure FIGREF2 , the error is backpropagated until the embeddings, that we fine-tune during the learning process. Notice also that the weights of the network until the layer INLINEFORM3 are shared and therefore affected by both tasks. To tune the neural network hyper-parameters we used 5-fold cross validation. We tuned the probability INLINEFORM0 of dropout after the hidden layers INLINEFORM1 and for the biLSTM for INLINEFORM2 , the size of the hidden layer INLINEFORM3 and the probability INLINEFORM4 of the Bernoulli trials from INLINEFORM5 . During training, we monitor the network's performance on the development set and apply early stopping if the performance on the validation set does not improve for 5 consecutive epochs. Experimental results Table TABREF9 illustrates the performance of the models for the different data representations. The upper part of the Table summarizes the performance of the baselines. The entry “Balikas et al.” stands for the winning system of the 2016 edition of the challenge BIBREF2 , which to the best of our knowledge holds the state-of-the-art. Due to the stochasticity of training the biLSTM models, we repeat the experiment 10 times and report the average and the standard deviation of the performance achieved. Several observations can be made from the table. First notice that, overall, the best performance is achieved by the neural network architecture that uses multitask learning. This entails that the system makes use of the available resources efficiently and improves the state-of-the-art performance. In conjunction with the fact that we found the optimal probability INLINEFORM0 , this highlights the benefits of multitask learning over single task learning. Furthermore, as described above, the neural network-based models have only access to the training data as the development are hold for early stopping. On the other hand, the baseline systems were retrained on the union of the train and development sets. Hence, even with fewer resources available for training on the fine-grained problem, the neural networks outperform the baselines. We also highlight the positive effect of the additional features that previous research proposed. Adding the features both in the baselines and in the biLSTM-based architectures improves the INLINEFORM1 scores by several points. Lastly, we compare the performance of the baseline systems with the performance of the state-of-the-art system of BIBREF2 . While BIBREF2 uses n-grams (and character-grams) with INLINEFORM0 , the baseline systems (SVMs, LRs) used in this work use the nbow+ representation, that relies on unigrams. Although they perform on par, the competitive performance of nbow highlights the potential of distributed representations for short-text classification. Further, incorporating structure and distributed representations leads to the gains of the biLSTM network, in the multitask and single task setting. Similar observations can be drawn from Figure FIGREF10 that presents the INLINEFORM0 scores. 
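The training procedure above can be summarised with the schematic loop below. It is a sketch rather than the original Keras code: model is assumed to expose a generic train_on_task_batch(batch, task) step that backpropagates through the shared layers and the selected task's softmax head, dev_eval stands in for the development-set check used for early stopping, and the defaults for p_fine, max_epochs and the number of batches per epoch are placeholders.

    import random

    def train_multitask(model, fine_batches, ternary_batches, dev_eval,
                        p_fine=0.5, max_epochs=100, patience=5):
        best, stalled = float("inf"), 0
        for epoch in range(max_epochs):
            for _ in range(len(fine_batches)):
                # Bernoulli trial: with probability p_fine train on the
                # fine-grained task, otherwise on the ternary task.
                if random.random() < p_fine:
                    batch, task = random.choice(fine_batches), "fine_grained"
                else:
                    batch, task = random.choice(ternary_batches), "ternary"
                model.train_on_task_batch(batch, task)
            score = dev_eval(model)          # e.g. macro-averaged MAE on the dev set
            if score < best:
                best, stalled = score, 0
            else:
                stalled += 1
                if stalled >= patience:      # early stopping after 5 stagnant epochs
                    break
        return model
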
Again, the biLSTM network with multitask learning achieves the best performance. It is also to be noted that although the two evaluation measures are correlated in the sense that the ranking of the models is the same, small differences in the INLINEFORM1 have large effect on the scores of the INLINEFORM2 measure. Conclusion In this paper, we showed that by jointly learning the tasks of ternary and fine-grained classification with a multitask learning model, one can greatly improve the performance on the second. This opens several avenues for future research. Since sentiment is expressed in different textual types like tweets and paragraph-sized reviews, in different languages (English, German, ..) and in different granularity levels (binary, ternary,..) one can imagine multitask approaches that could benefit from combining such resources. Also, while we opted for biLSTM networks here, one could use convolutional neural networks or even try to combine different types of networks and tasks to investigate the performance effect of multitask learning. Lastly, while our approach mainly relied on the foundations of BIBREF4 , the internal mechanisms and the theoretical guarantees of multitask learning remain to be better understood. Acknowledgements This work is partially supported by the CIFRE N 28/2015. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: By how much did they improve? Only give me the answer and do not output any other words.
Answer:
[ "They decrease MAE in 0.34" ]
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: \section{Introduction} Underwater robot picking is to use the robot to automatically capture sea creatures like holothurian, echinus, scallop, or starfish in an open-sea farm where underwater object detection is the key technology for locating creatures. Until now, the datasets used in this community are released by the Underwater Robot Professional Contest (URPC$\protect\footnote{Underwater Robot Professional Contest: {\bf http://en.cnurpc.org}.}$) beginning from 2017, in which URPC2017 and URPC2018 are most often used for research. Unfortunately, as the information listed in Table \ref{Info}, URPC series datasets do not provide the annotation file of the test set and cannot be downloaded after the contest. Therefore, researchers \cite{2020arXiv200511552C,2019arXiv191103029L} first have to divide the training data into two subsets, including a new subset of training data and a new subset of testing data, and then train their proposed method and other \emph{SOTA} methods. On the one hand, training other methods results in a significant increase in workload. On the other hand, different researchers divide different datasets in different ways, \begin{table}[t] \renewcommand\tabcolsep{3.5pt} \caption{Information about all the collected datasets. * denotes the test set's annotations are not available. \emph{3} in Class means three types of creatures are labeled, \emph{i.e.,} holothurian, echinus, and scallop. \emph{4} means four types of creatures are labeled (starfish added). Retention represents the proportion of images that retain after similar images have been removed.} \centering \begin{tabular}{|l|c|c|c|c|c|} \hline Dataset&Train&Test&Class&Retention&Year \\ \hline URPC2017&17,655&985*&3&15\%&2017 \\ \hline URPC2018&2,901&800*&4&99\%&2018 \\ \hline URPC2019&4,757&1,029*&4&86\%&2019 \\ \hline URPC2020$_{ZJ}$&5,543&2,000*&4&82\%&2020 \\ \hline URPC2020$_{DL}$&6,575&2,400*&4&80\%&2020 \\ \hline UDD&1,827&400&3&84\%&2020 \\ \hline \end{tabular} \label{Info} \end{table} \begin{figure*}[htbp] \begin{center} \includegraphics[width=1\linewidth]{example.pdf} \end{center} \caption{Examples in DUO, which show a variety of scenarios in underwater environments.} \label{exam} \end{figure*} causing there is no unified benchmark to compare the performance of different algorithms. In terms of the content of the dataset images, there are a large number of similar or duplicate images in the URPC datasets. URPC2017 only retains 15\% images after removing similar images compared to other datasets. Thus the detector trained on URPC2017 is easy to overfit and cannot reflect the real performance. For other URPC datasets, the latter also includes images from the former, \emph{e.g.}, URPC2019 adds 2,000 new images compared to URPC2018; compared with URPC2019, URPC2020$_{ZJ}$ adds 800 new images. The URPC2020$_{DL}$ adds 1,000 new images compared to the URPC2020$_{ZJ}$. It is worth mentioning that the annotation of all datasets is incomplete; some datasets lack the starfish labels and it is easy to find error or missing labels. \cite{DBLP:conf/iclr/ZhangBHRV17} pointed out that although the CNN model has a strong fitting ability for any dataset, the existence of dirty data will significantly weaken its robustness. 
Therefore, a reasonable dataset (containing a small number of similar images as well as an accurate annotation) and a corresponding recognized benchmark are urgently needed to promote community development. To address these issues, we introduce a dataset called Detecting Underwater Objects (DUO) by collecting and re-annotating all the available underwater datasets. It contains 7,782 underwater images after deleting overly similar images and has a more accurate annotation with four types of classes (\emph{i.e.,} holothurian, echinus, scallop, and starfish). Besides, based on the MMDetection$\protect\footnote{MMDetection is an open source object detection toolbox based on PyTorch. {\bf https://github.com/open-mmlab/mmdetection}}$ \cite{chen2019mmdetection} framework, we also provide a \emph{SOTA} detector benchmark containing efficiency and accuracy indicators, providing a reference for both academic research and industrial applications. It is worth noting that JETSON AGX XAVIER$\protect\footnote{JETSON AGX XAVIER is an embedded development board produced by NVIDIA which could be deployed in an underwater robot. Please refer {\bf https://developer.nvidia.com/embedded/jetson-agx-xavier-developer-kit} for more information.}$ was used to assess all the detectors in the efficiency test in order to simulate robot-embedded environment. DUO will be released in https://github.com/chongweiliu soon. In summary, the contributions of this paper can be listed as follows. $\bullet$ By collecting and re-annotating all relevant datasets, we introduce a dataset called DUO with more reasonable annotations as well as a variety of underwater scenes. $\bullet$ We provide a corresponding benchmark of \emph{SOTA} detectors on DUO including efficiency and accuracy indicators which could be a reference for both academic research and industrial applications. \pagestyle{empty} \section{Background} In the year of 2017, underwater object detection for open-sea farming is first proposed in the target recognition track of Underwater Robot Picking Contest 2017$\protect\footnote{From 2020, the name has been changed into Underwater Robot Professional Contest which is also short for URPC.}$ (URPC2017) which aims to promote the development of theory, technology, and industry of the underwater agile robot and fill the blank of the grabbing task of the underwater agile robot. The competition sets up a target recognition track, a fixed-point grasping track, and an autonomous grasping track. The target recognition track concentrates on finding the {\bf high accuracy and efficiency} algorithm which could be used in an underwater robot for automatically grasping. The datasets we used to generate the DUO are listed below. The detailed information has been shown in Table \ref{Info}. {\bf URPC2017}: It contains 17,655 images for training and 985 images for testing and the resolution of all the images is 720$\times$405. All the images are taken from 6 videos at an interval of 10 frames. However, all the videos were filmed in an artificial simulated environment and pictures from the same video look almost identical. {\bf URPC2018}: It contains 2,901 images for training and 800 images for testing and the resolutions of the images are 586$\times$480, 704$\times$576, 720$\times$405, and 1,920$\times$1,080. The test set's annotations are not available. Besides, some images were also collected from an artificial underwater environment. 
{\bf URPC2019}: It contains 4,757 images for training and 1029 images for testing and the highest resolution of the images is 3,840$\times$2,160 captured by a GOPro camera. The test set's annotations are also not available and it contains images from the former contests. {\bf URPC2020$_{ZJ}$}: From 2020, the URPC will be held twice a year. It was held first in Zhanjiang, China, in April and then in Dalian, China, in August. URPC2020$_{ZJ}$ means the dataset released in the first URPC2020 and URPC2020$_{DL}$ means the dataset released in the second URPC2020. This dataset contains 5,543 images for training and 2,000 images for testing and the highest resolution of the images is 3,840$\times$2,160. The test set's annotations are also not available. {\bf URPC2020$_{DL}$}: This dataset contains 6,575 images for training and 2,400 images for testing and the highest resolution of the images is 3,840$\times$2,160. The test set's annotations are also not available. {\bf UDD \cite{2020arXiv200301446W}}: This dataset contains 1,827 images for training and 400 images for testing and the highest resolution of the images is 3,840$\times$2,160. All the images are captured by a diver and a robot in a real open-sea farm. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{pie.pdf} \end{center} \caption{The proportion distribution of the objects in DUO.} \label{pie} \end{figure} \begin{figure*} \centering \subfigure[]{\includegraphics[width=3.45in]{imagesize.pdf}} \subfigure[]{\includegraphics[width=3.45in]{numInstance.pdf}} \caption{(a) The distribution of instance sizes for DUO; (b) The number of categories per image.} \label{sum} \end{figure*} \section{Proposed Dataset} \subsection{Image Deduplicating} As we explained in Section 1, there are a large number of similar or repeated images in the series of URPC datasets. Therefore, it is important to delete duplicate or overly similar images and keep a variety of underwater scenarios when we merge these datasets together. Here we employ the Perceptual Hash algorithm (PHash) to remove those images. PHash has the special property that the hash value is dependent on the image content, and it remains approximately the same if the content is not significantly modified. Thus we can easily distinguish different scenarios and delete duplicate images within one scenario. After deduplicating, we obtain 7,782 images (6,671 images for training; 1,111 for testing). The retention rate of the new dataset is 95\%, which means that there are only a few similar images in the new dataset. Figure \ref{exam} shows that our dataset also retains various underwater scenes. \subsection{Image Re-annotation} Due to the small size of objects and the blur underwater environment, there are always missing or wrong labels in the existing annotation files. In addition, some test sets' annotation files are not available and some datasets do not have the starfish annotation. In order to address these issues, we follow the next process which combines a CNN model and manual annotation to re-annotate these images. Specifically, we first train a detector (\emph{i.e.,} GFL \cite{li2020generalized}) with the originally labeled images. After that, the trained detector predicts all the 7,782 images. We treat the prediction as the groundtruth and use it to train the GFL again. We get the final GFL prediction called {\bf the coarse annotation}. Next, we use manual correction to get the final annotation called {\bf the fine annotation}. 
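The PHash-based deduplication described above can be reproduced in a few lines. The sketch below is an illustration only, not the authors' released pipeline: it assumes the third-party imagehash and Pillow packages, a flat image directory with a placeholder name, and a Hamming-distance threshold of 8 that is an assumption rather than a value reported in the paper.

```python
# Minimal sketch of perceptual-hash (PHash) deduplication.
# Assumptions: images live in one flat directory; a Hamming-distance
# threshold of 8 (not taken from the paper) decides "overly similar".
from pathlib import Path

import imagehash              # pip install ImageHash
from PIL import Image         # pip install Pillow

def deduplicate(image_dir: str, max_distance: int = 8) -> list[Path]:
    kept_paths: list[Path] = []
    kept_hashes: list[imagehash.ImageHash] = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        h = imagehash.phash(Image.open(path))   # content-dependent 64-bit hash
        # Keep the image only if it is far (in Hamming distance) from every kept one.
        if all(h - other > max_distance for other in kept_hashes):
            kept_paths.append(path)
            kept_hashes.append(h)
    return kept_paths

# Example (hypothetical directory name):
# kept = deduplicate("urpc_all_images"); print(len(kept))
```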
Notably, we adopt the COCO \cite{Belongie2014} annotation form as the final format. \subsection{Dataset Statistics} {\bf The proportion of classes}: The total number of objects is 74,515. Holothurian, echinus, scallop, and starfish are 7,887, 50,156, 1,924, and 14,548, respectively. Figure \ref{pie} shows the proportion of each creatures where echinus accounts for 67.3\% of the total. The whole data distribution shows an obvious long-tail distribution because the different economic benefits of different seafoods determine the different breed quantities. {\bf The distribution of instance sizes}: Figure \ref{sum}(a) shows an instance size distribution of DUO. \emph{Percent of image size} represents the ratio of object area to image area, and \emph{Percent of instance} represents the ratio of the corresponding number of objects to the total number of objects. Because of these small creatures and high-resolution images, the vast majority of objects occupy 0.3\% to 1.5\% of the image area. {\bf The instance number per image}: Figure \ref{sum}(b) illustrates the number of categories per image for DUO. \emph{Number of instances} represents the number of objects one image has, and \emph{ Percentage of images} represents the ratio of the corresponding number of images to the total number of images. Most images contain between 5 and 15 instances, with an average of 9.57 instances per image. {\bf Summary}: In general, smaller objects are harder to detect. For PASCAL VOC \cite{Everingham2007The} or COCO \cite{Belongie2014}, roughly 50\% of all objects occupy no more than 10\% of the image itself, and others evenly occupy from 10\% to 100\%. In the aspect of instances number per image, COCO contains 7.7 instances per image and VOC contains 3. In comparison, DUO has 9.57 instances per image and most instances less than 1.5\% of the image size. Therefore, DUO contains almost exclusively massive small instances and has the long-tail distribution at the same time, which means it is promising to design a detector to deal with massive small objects and stay high efficiency at the same time for underwater robot picking. \section{Benchmark} Because the aim of underwater object detection for robot picking is to find {\bf the high accuracy and efficiency} algorithm, we consider both the accuracy and efficiency evaluations in the benchmark as shown in Table \ref{ben}. \subsection{Evaluation Metrics} Here we adopt the standard COCO metrics (mean average precision, \emph{i.e.,} mAP) for the accuracy evaluation and also provide the mAP of each class due to the long-tail distribution. {\bf AP} -- mAP at IoU=0.50:0.05:0.95. {\bf AP$_{50}$} -- mAP at IoU=0.50. {\bf AP$_{75}$} -- mAP at IoU=0.75. {\bf AP$_{S}$} -- {\bf AP} for small objects of area smaller than 32$^{2}$. {\bf AP$_{M}$} -- {\bf AP} for objects of area between 32$^{2}$ and 96$^{2}$. {\bf AP$_{L}$} -- {\bf AP} for large objects of area bigger than 96$^{2}$. {\bf AP$_{Ho}$} -- {\bf AP} in holothurian. {\bf AP$_{Ec}$} -- {\bf AP} in echinus. {\bf AP$_{Sc}$} -- {\bf AP} in scallop. {\bf AP$_{St}$} -- {\bf AP} in starfish. For the efficiency evaluation, we provide three metrics: {\bf Param.} -- The parameters of a detector. {\bf FLOPs} -- Floating-point operations per second. {\bf FPS} -- Frames per second. Notably, {\bf FLOPs} is calculated under the 512$\times$512 input image size and {\bf FPS} is tested on a JETSON AGX XAVIER under MODE$\_$30W$\_$ALL. 
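Because DUO adopts the COCO annotation format, the accuracy metrics listed above (AP, AP$_{50}$, AP$_{75}$, AP$_{S/M/L}$ and per-class AP) can be computed with the standard pycocotools evaluator, which MMDetection typically uses internally. The sketch below is a generic illustration rather than the benchmark code itself; the file names are placeholders.

```python
# Minimal sketch: computing the COCO-style AP metrics listed above for
# bounding-box detections stored in the standard COCO results format.
# "duo_test.json" and "detections.json" are placeholder file names.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("duo_test.json")                 # ground-truth annotations
coco_dt = coco_gt.loadRes("detections.json")    # detector outputs

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()
# evaluator.stats[0:6] correspond to AP, AP50, AP75, AP_S, AP_M, AP_L.

# Per-class AP (e.g., AP_Ho, AP_Ec, AP_Sc, AP_St) can be obtained by
# restricting the evaluation to a single category id:
for cat_id in coco_gt.getCatIds():
    per_class = COCOeval(coco_gt, coco_dt, iouType="bbox")
    per_class.params.catIds = [cat_id]
    per_class.evaluate()
    per_class.accumulate()
    per_class.summarize()
```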
\subsection{Standard Training Configuration} We follow a widely used open-source toolbox, \emph{i.e.,} MMDetection (V2.5.0) to produce up our benchmark. During the training, the standard configurations are as follows: $\bullet$ We initialize the backbone models (\emph{e.g.,} ResNet50) with pre-trained parameters on ImageNet \cite{Deng2009ImageNet}. $\bullet$ We resize each image into 512 $\times$ 512 pixels both in training and testing. Each image is flipped horizontally with 0.5 probability during training. $\bullet$ We normalize RGB channels by subtracting 123.675, 116.28, 103.53 and dividing by 58.395, 57.12, 57.375, respectively. $\bullet$ SGD method is adopted to optimize the model. The initial learning rate is set to be 0.005 in a single GTX 1080Ti with batchsize 4 and is decreased by 0.1 at the 8th and 11th epoch, respectively. WarmUp \cite{2019arXiv190307071L} is also employed in the first 500 iterations. Totally there are 12 training epochs. $\bullet$ Testing time augmentation (\emph{i.e.,} flipping test or multi-scale testing) is not employed. \subsection{Benchmark Analysis} Table \ref{ben} shows the benchmark for the \emph{SOTA} methods. Multi- and one- stage detectors with three kinds of backbones (\emph{i.e.,} ResNet18, 50, 101) give a comprehensive assessment on DUO. We also deploy all the methods to AGX to assess efficiency. In general, the multi-stage (Cascade R-CNN) detectors have high accuracy and low efficiency, while the one-stage (RetinaNet) detectors have low accuracy and high efficiency. However, due to recent studies \cite{zhang2019bridging} on the allocation of more reasonable positive and negative samples in training, one-stage detectors (ATSS or GFL) can achieve both high accuracy and high efficiency. \begin{table*}[htbp] \renewcommand\tabcolsep{3.0pt} \begin{center} \caption{Benchmark of \emph{SOTA} detectors (single-model and single-scale results) on DUO. FPS is measured on the same machine with a JETSON AGX XAVIER under the same MMDetection framework, using a batch size of 1 whenever possible. 
R: ResNet.} \label{ben} \begin{tabular}{|l|l|c|c|c|ccc|ccc|cccc|} \hline Method&Backbone&Param.&FLOPs&FPS&AP&AP$_{50}$&AP$_{75}$&AP$_{S}$&AP$_{M}$&AP$_{L}$&AP$_{Ho}$&AP$_{Ec}$&AP$_{Sc}$&AP$_{St}$ \\ \hline \emph{multi-stage:} &&&&&&&&&&&&&& \\ \multirow{3}{*}{Faster R-CNN \cite{Ren2015Faster}} &R-18&28.14M&49.75G&5.7&50.1&72.6&57.8&42.9&51.9&48.7&49.1&60.1&31.6&59.7\\ &R-50&41.14M&63.26G&4.7&54.8&75.9&63.1&53.0&56.2&53.8&55.5&62.4&38.7&62.5\\ &R-101&60.13M&82.74G&3.7&53.8&75.4&61.6&39.0&55.2&52.8&54.3&62.0&38.5&60.4\\ \hline \multirow{3}{*}{Cascade R-CNN \cite{Cai_2019}} &R-18&55.93M&77.54G&3.4&52.7&73.4&60.3&\bf 49.0&54.7&50.9&51.4&62.3&34.9&62.3\\ &R-50&68.94M&91.06G&3.0&55.6&75.5&63.8&44.9&57.4&54.4&56.8&63.6&38.7&63.5\\ &R-101&87.93M&110.53G&2.6&56.0&76.1&63.6&51.2&57.5&54.7&56.2&63.9&41.3&62.6\\ \hline \multirow{3}{*}{Grid R-CNN \cite{lu2019grid}} &R-18&51.24M&163.15G&3.9&51.9&72.1&59.2&40.4&54.2&50.1&50.7&61.8&33.3&61.9\\ &R-50&64.24M&176.67G&3.4&55.9&75.8&64.3&40.9&57.5&54.8&56.7&62.9&39.5&64.4\\ &R-101&83.24M&196.14G&2.8&55.6&75.6&62.9&45.6&57.1&54.5&55.5&62.9&41.0&62.9\\ \hline \multirow{3}{*}{RepPoints \cite{yang2019reppoints}} &R-18&20.11M&\bf 35.60G&5.6&51.7&76.9&57.8&43.8&54.0&49.7&50.8&63.3&33.6&59.2\\ &R-50&36.60M&48.54G&4.8&56.0&80.2&63.1&40.8&58.5&53.7&56.7&65.7&39.3&62.3\\ &R-101&55.60M&68.02G&3.8&55.4&79.0&62.6&42.2&57.3&53.9&56.0&65.8&39.0&60.9\\ \hline \hline \emph{one-stage:} &&&&&&&&&&&&&& \\ \multirow{3}{*}{RetinaNet \cite{Lin2017Focal}} &R-18&19.68M&39.68G&7.1&44.7&66.3&50.7&29.3&47.6&42.5&46.9&54.2&23.9&53.8\\ &R-50&36.17M&52.62G&5.9&49.3&70.3&55.4&36.5&51.9&47.6&54.4&56.6&27.8&58.3\\ &R-101&55.16M&72.10G&4.5&50.4&71.7&57.3&34.6&52.8&49.0&54.6&57.0&33.7&56.3\\ \hline \multirow{3}{*}{FreeAnchor \cite{2019arXiv190902466Z}} &R-18&19.68M&39.68G&6.8&49.0&71.9&55.3&38.6&51.7&46.7&47.2&62.8&28.6&57.6\\ &R-50&36.17M&52.62G&5.8&54.4&76.6&62.5&38.1&55.7&53.4&55.3&65.2&35.3&61.8\\ &R-101&55.16M&72.10G&4.4&54.6&76.9&62.9&36.5&56.5&52.9&54.0&65.1&38.4&60.7\\ \hline \multirow{3}{*}{FoveaBox \cite{DBLP:journals/corr/abs-1904-03797}} &R-18&21.20M&44.75G&6.7&51.6&74.9&57.4&40.0&53.6&49.8&51.0&61.9&34.6&59.1\\ &R-50&37.69M&57.69G&5.5&55.3&77.8&62.3&44.7&57.4&53.4&57.9&64.2&36.4&62.8\\ &R-101&56.68M&77.16G&4.2&54.7&77.3&62.3&37.7&57.1&52.4&55.3&63.6&38.9&60.8\\ \hline \multirow{3}{*}{PAA \cite{2020arXiv200708103K}} &R-18&\bf 18.94M&38.84G&3.0&52.6&75.3&58.8&41.3&55.1&50.2&49.9&64.6&35.6&60.5\\ &R-50&31.89M&51.55G&2.9&56.8&79.0&63.8&38.9&58.9&54.9&56.5&66.9&39.9&64.0\\ &R-101&50.89M&71.03G&2.4&56.5&78.5&63.7&40.9&58.7&54.5&55.8&66.5&42.0&61.6\\ \hline \multirow{3}{*}{FSAF \cite{zhu2019feature}} &R-18&19.53M&38.88G&\bf 7.4&49.6&74.3&55.1&43.4&51.8&47.5&45.5&63.5&30.3&58.9\\ &R-50&36.02M&51.82G&6.0&54.9&79.3&62.1&46.2&56.7&53.3&53.7&66.4&36.8&62.5\\ &R-101&55.01M&55.01G&4.5&54.6&78.7&61.9&46.0&57.1&52.2&53.0&66.3&38.2&61.1\\ \hline \multirow{3}{*}{FCOS \cite{DBLP:journals/corr/abs-1904-01355}} &R-18&\bf 18.94M&38.84G&6.5&48.4&72.8&53.7&30.7&50.9&46.3&46.5&61.5&29.1&56.6\\ &R-50&31.84M&50.34G&5.4&53.0&77.1&59.9&39.7&55.6&50.5&52.3&64.5&35.2&60.0\\ &R-101&50.78M&69.81G&4.2&53.2&77.3&60.1&43.4&55.4&51.2&51.7&64.1&38.5&58.5\\ \hline \multirow{3}{*}{ATSS \cite{zhang2019bridging}} &R-18&\bf 18.94M&38.84G&6.0&54.0&76.5&60.9&44.1&56.6&51.4&52.6&65.5&35.8&61.9\\ &R-50&31.89M&51.55G&5.2&58.2&\bf 80.1&66.5&43.9&60.6&55.9&\bf 58.6&67.6&41.8&64.6\\ &R-101&50.89M&71.03G&3.8&57.6&79.4&65.3&46.5&60.3&55.0&57.7&67.2&42.6&62.9\\ \hline \multirow{3}{*}{GFL \cite{li2020generalized}} 
&R-18&19.09M&39.63G&6.3&54.4&75.5&61.9&35.0&57.1&51.8&51.8&66.9&36.5&62.5\\ &R-50&32.04M&52.35G&5.5&\bf 58.6&79.3&\bf 66.7&46.5&\bf 61.6&55.6&\bf 58.6&\bf 69.1&41.3&\bf 65.3\\ &R-101&51.03M&71.82G&4.1&58.3&79.3&65.5&45.1&60.5&\bf 56.3&57.0&\bf 69.1&\bf 43.0&64.0\\ \hline \end{tabular} \end{center} \end{table*} Therefore, in terms of accuracy, the accuracy difference between the multi- and the one- stage methods in AP is not obvious, and the AP$_{S}$ of different methods is always the lowest among the three size AP. For class AP, AP$_{Sc}$ lags significantly behind the other three classes because it has the smallest number of instances. In terms of efficiency, large parameters and FLOPs result in low FPS on AGX, with a maximum FPS of 7.4, which is hardly deployable on underwater robot. Finally, we also found that ResNet101 was not significantly improved over ResNet50, which means that a very deep network may not be useful for detecting small creatures in underwater scenarios. Consequently, the design of high accuracy and high efficiency detector is still the main direction in this field and there is still large space to improve the performance. In order to achieve this goal, a shallow backbone with strong multi-scale feature fusion ability can be proposed to extract the discriminant features of small scale aquatic organisms; a specially designed training strategy may overcome the DUO's long-tail distribution, such as a more reasonable positive/negative label sampling mechanism or a class-balanced image allocation strategy within a training batch. \section{Conclusion} In this paper, we introduce a dataset (DUO) and a corresponding benchmark to fill in the gaps in the community. DUO contains a variety of underwater scenes and more reasonable annotations. Benchmark includes efficiency and accuracy indicators to conduct a comprehensive evaluation of the \emph{SOTA} decoders. The two contributions could serve as a reference for academic research and industrial applications, as well as promote community development. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Does DUO contain more instances per image than COCO? Only give me the answer and do not output any other words.
Answer:
[ "Yes, DUO has 9.57 instances per image while COCO contains 7.7." ]
276
6,992
single_doc_qa
04e923fc48f14aebb3ceecd57d889183
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The irony is a kind of figurative language, which is widely used on social media BIBREF0 . The irony is defined as a clash between the intended meaning of a sentence and its literal meaning BIBREF1 . As an important aspect of language, irony plays an essential role in sentiment analysis BIBREF2 , BIBREF0 and opinion mining BIBREF3 , BIBREF4 . Although some previous studies focus on irony detection, little attention is paid to irony generation. As ironies can strengthen sentiments and express stronger emotions, we mainly focus on generating ironic sentences. Given a non-ironic sentence, we implement a neural network to transfer it to an ironic sentence and constrain the sentiment polarity of the two sentences to be the same. For example, the input is “I hate it when my plans get ruined" which is negative in sentiment polarity and the output should be ironic and negative in sentiment as well, such as “I like it when my plans get ruined". The speaker uses “like" to be ironic and express his or her negative sentiment. At the same time, our model can preserve contents which are irrelevant to sentiment polarity and irony. According to the categories mentioned in BIBREF5 , irony can be classified into 3 classes: verbal irony by means of a polarity contrast, the sentences containing expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, the sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, the sentences that describe situations that fail to meet some expectations. As ironies in the latter two categories are obscure and hard to understand, we decide to only focus on ironies in the first category in this work. For example, our work can be specifically described as: given a sentence “I hate to be ignored", we train our model to generate an ironic sentence such as “I love to be ignored". Although there is “love" in the generated sentence, the speaker still expresses his or her negative sentiment by irony. We also make some explorations in the transformation from ironic sentences to non-ironic sentences at the end of our work. Because of the lack of previous work and baselines on irony generation, we implement our model based on style transfer. Our work will not only provide the first large-scale irony dataset but also make our model as a benchmark for the irony generation. Recently, unsupervised style transfer becomes a very popular topic. Many state-of-the-art studies try to solve the task with sequence-to-sequence (seq2seq) framework. There are three main ways to build up models. The first is to learn a latent style-independent content representation and generate sentences with the content representation and another style BIBREF6 , BIBREF7 . The second is to directly transfer sentences from one style to another under the control of classifiers and reinforcement learning BIBREF8 . The third is to remove style attribute words from the input sentence and combine the remaining content with new style attribute words BIBREF9 , BIBREF10 . The first method usually obtains better performances via adversarial training with discriminators. 
The style-independent content representation, nevertheless, is not easily obtained BIBREF11 , which results in poor performances. The second method is suitable for complex styles which are difficult to model and describe. The model can learn the deep semantic features by itself but sometimes the model is sensitive to parameters and hard to train. The third method succeeds to preserve content but cannot work for some complex styles such as democratic and republican. Sentences with those styles usually do not have specific style attribute words. Unfortunately, due to the lack of large irony dataset and difficulties of modeling ironies, there has been little work trying to generate ironies based on seq2seq framework as far as we know. Inspired by methods for style transfer, we decide to implement a specifically designed model based on unsupervised style transfer to explore irony generation. In this paper, in order to address the lack of irony data, we first crawl over 2M tweets from twitter to build a dataset with 262,755 ironic and 112,330 non-ironic tweets. Then, due to the lack of parallel data, we propose a novel model to transfer non-ironic sentences to ironic sentences in an unsupervised way. As ironic style is hard to model and describe, we implement our model with the control of classifiers and reinforcement learning. Different from other studies in style transfer, the transformation from non-ironic to ironic sentences has to preserve sentiment polarity as mentioned above. Therefore, we not only design an irony reward to control the irony accuracy and implement denoising auto-encoder and back-translation to control content preservation but also design a sentiment reward to control sentiment preservation. Experimental results demonstrate that our model achieves a high irony accuracy with well-preserved sentiment and content. The contributions of our work are as follows: Related Work Style Transfer: As irony is a complicated style and hard to model with some specific style attribute words, we mainly focus on studies without editing style attribute words. Some studies are trying to disentangle style representation from content representation. In BIBREF12 , authors leverage adversarial networks to learn separate content representations and style representations. In BIBREF13 and BIBREF6 , researchers combine variational auto-encoders (VAEs) with style discriminators. However, some recent studies BIBREF11 reveal that the disentanglement of content and style representations may not be achieved in practice. Therefore, some other research studies BIBREF9 , BIBREF10 strive to separate content and style by removing stylistic words. Nonetheless, many non-ironic sentences do not have specific stylistic words and as a result, we find it difficult to transfer non-ironic sentences to ironic sentences through this way in practice. Besides, some other research studies do not disentangle style from content but directly learn representations of sentences. In BIBREF8 , authors propose a dual reinforcement learning framework without separating content and style representations. In BIBREF7 , researchers utilize a machine translation model to learn a sentence representation preserving the meaning of the sentence but reducing stylistic properties. In this method, the quality of generated sentences relies on the performance of classifiers to a large extent. Meanwhile, such models are usually sensitive to parameters and difficult to train. 
In contrast, we combine a pre-training process with reinforcement learning to build up a stable language model and design special rewards for our task. Irony Detection: With the development of social media, irony detection becomes a more important task. Methods for irony detection can be mainly divided into two categories: methods based on feature engineering and methods based on neural networks. As for methods based on feature engineering, In BIBREF1 , authors investigate pragmatic phenomena and various irony markers. In BIBREF14 , researchers leverage a combination of sentiment, distributional semantic and text surface features. Those models rely on hand-crafted features and are hard to implement. When it comes to methods based on neural networks, long short-term memory (LSTM) BIBREF15 network is widely used and is very efficient for irony detection. In BIBREF16 , a tweet is divided into two segments and a subtract layer is implemented to calculate the difference between two segments in order to determine whether the tweet is ironic. In BIBREF17 , authors utilize a recurrent neural network with Bi-LSTM and self-attention without hand-crafted features. In BIBREF18 , researchers propose a system based on a densely connected LSTM network. Our Dataset In this section, we describe how we build our dataset with tweets. First, we crawl over 2M tweets from twitter using GetOldTweets-python. We crawl English tweets from 04/09/2012 to /12/18/2018. We first remove all re-tweets and use langdetect to remove all non-English sentences. Then, we remove hashtags attached at the end of the tweets because they are usually not parts of sentences and will confuse our language model. After that, we utilize Ekphrasis to process tweets. We remove URLs and restore remaining hashtags, elongated words, repeated words, and all-capitalized words. To simplify our dataset, We replace all “ INLINEFORM0 money INLINEFORM1 " and “ INLINEFORM2 time INLINEFORM3 " tokens with “ INLINEFORM4 number INLINEFORM5 " token when using Ekphrasis. And we delete sentences whose lengths are less than 10 or greater than 40. In order to restore abbreviations, we download an abbreviation dictionary from webopedia and restore abbreviations to normal words or phrases according to the dictionary. Finally, we remove sentences which have more than two rare words (appearing less than three times) in order to constrain the size of vocabulary. Finally, we get 662,530 sentences after pre-processing. As neural networks are proved effective in irony detection, we decide to implement a neural classifier in order to classify the sentences into ironic and non-ironic sentences. However, the only high-quality irony dataset we can obtain is the dataset of Semeval-2018 Task 3 and the dataset is pretty small, which will cause overfitting to complex models. Therefore, we just implement a simple one-layer RNN with LSTM cell to classify pre-processed sentences into ironic sentences and non-ironic sentences because LSTM networks are widely used in irony detection. We train the model with the dataset of Semeval-2018 Task 3. After classification, we get 262,755 ironic sentences and 399,775 non-ironic sentences. According to our observation, not all non-ironic sentences are suitable to be transferred into ironic sentences. For example, “just hanging out . watching . is it monday yet" is hard to transfer because it does not have an explicit sentiment polarity. 
So we remove all interrogative sentences from the non-ironic sentences and only obtain the sentences which have words expressing strong sentiments. We evaluate the sentiment polarity of each word with TextBlob and we view those words with sentiment scores greater than 0.5 or less than -0.5 as words expressing strong sentiments. Finally, we build our irony dataset with 262,755 ironic sentences and 102,330 non-ironic sentences. [t] Irony Generation Algorithm INLINEFORM0 pre-train with auto-encoder Pre-train INLINEFORM1 , INLINEFORM2 with INLINEFORM3 using MLE based on Eq. EQREF16 Pre-train INLINEFORM4 , INLINEFORM5 with INLINEFORM6 using MLE based on Eq. EQREF17 INLINEFORM7 pre-train with back-translation Pre-train INLINEFORM8 , INLINEFORM9 , INLINEFORM10 , INLINEFORM11 with INLINEFORM12 using MLE based on Eq. EQREF19 Pre-train INLINEFORM13 , INLINEFORM14 , INLINEFORM15 , INLINEFORM16 with INLINEFORM17 using MLE based on Eq. EQREF20 INLINEFORM0 train with RL each epoch e = 1, 2, ..., INLINEFORM1 INLINEFORM2 train non-irony2irony with RL INLINEFORM3 in N INLINEFORM4 update INLINEFORM5 , INLINEFORM6 , using INLINEFORM7 based on Eq. EQREF29 INLINEFORM8 back-translation INLINEFORM9 INLINEFORM10 INLINEFORM11 update INLINEFORM12 , INLINEFORM13 , INLINEFORM14 , INLINEFORM15 using MLE based on Eq. EQREF19 INLINEFORM16 train irony2non-irony with RL INLINEFORM17 in I INLINEFORM18 update INLINEFORM19 , INLINEFORM20 , using INLINEFORM21 similar to Eq. EQREF29 INLINEFORM22 back-translation INLINEFORM23 INLINEFORM24 INLINEFORM25 update INLINEFORM26 , INLINEFORM27 , INLINEFORM28 , INLINEFORM29 using MLE based on Eq. EQREF20 Our Method Given two non-parallel corpora: non-ironic corpus N={ INLINEFORM0 , INLINEFORM1 , ..., INLINEFORM2 } and ironic corpus I={ INLINEFORM3 , INLINEFORM4 , ..., INLINEFORM5 }, the goal of our irony generation model is to generate an ironic sentence from a non-ironic sentence while preserving the content and sentiment polarity of the source input sentence. We implement an encoder-decoder framework where two encoders are utilized to encode ironic sentences and non-ironic sentences respectively and two decoders are utilized to decode ironic sentences and non-ironic sentences from latent representations respectively. In order to enforce a shared latent space, we share two layers on both the encoder side and the decoder side. Our model architecture is illustrated in Figure FIGREF13 . We denote irony encoder as INLINEFORM6 , irony decoder as INLINEFORM7 and non-irony encoder as INLINEFORM8 , non-irony decoder as INLINEFORM9 . Their parameters are INLINEFORM10 , INLINEFORM11 , INLINEFORM12 and INLINEFORM13 . Our irony generation algorithm is shown in Algorithm SECREF3 . We first pre-train our model using denoising auto-encoder and back-translation to build up language models for both styles (section SECREF14 ). Then we implement reinforcement learning to train the model to transfer sentences from one style to another (section SECREF21 ). Meanwhile, to achieve content preservation, we utilize back-translation for one time in every INLINEFORM0 time steps. Pretraining In order to build up our language model and preserve the content, we apply the auto-encoder model. To prevent the model from simply copying the input sentence, we randomly add some noises in the input sentence. Specifically, for every word in the input sentence, there is 10% chance that we delete it, 10 % chance that we duplicate it, 10% chance that we swap it with the next word, or it remains unchanged. 
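The word-level noising just described (10% delete, 10% duplicate, 10% swap with the next word, otherwise unchanged) is straightforward to implement. The sketch below is one plausible reading of that description, not the authors' code.

```python
import random

def add_noise(tokens: list[str], p: float = 0.1) -> list[str]:
    """Corrupt a tokenized sentence for the denoising auto-encoder:
    each word is deleted, duplicated, or swapped with the next word
    with probability p each; otherwise it is kept unchanged."""
    noisy: list[str] = []
    i = 0
    while i < len(tokens):
        r = random.random()
        if r < p:                                  # delete the word
            i += 1
        elif r < 2 * p:                            # duplicate the word
            noisy.extend([tokens[i], tokens[i]])
            i += 1
        elif r < 3 * p and i + 1 < len(tokens):    # swap with the next word
            noisy.extend([tokens[i + 1], tokens[i]])
            i += 2
        else:                                      # keep unchanged
            noisy.append(tokens[i])
            i += 1
    return noisy

# Example: add_noise("i hate it when my plans get ruined".split())
```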
We first encode the input sentence INLINEFORM0 or INLINEFORM1 with respective encoder INLINEFORM2 or INLINEFORM3 to obtain its latent representation INLINEFORM4 or INLINEFORM5 and reconstruct the input sentence with the latent representation and respective decoder. So we can get the reconstruction loss for auto-encoder INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1 In addition to denoising auto-encoder, we implement back-translation BIBREF19 to generate a pseudo-parallel corpus. Suppose our model takes non-ironic sentence INLINEFORM0 as input. We first encode INLINEFORM1 with INLINEFORM2 to obtain its latent representation INLINEFORM3 and decode the latent representation with INLINEFORM4 to get a transferred sentence INLINEFORM5 . Then we encode INLINEFORM6 with INLINEFORM7 and decode its latent representation with INLINEFORM8 to reconstruct the original input sentence INLINEFORM9 . Therefore, our reconstruction loss for back-translation INLINEFORM10 : DISPLAYFORM0 And if our model takes ironic sentence INLINEFORM0 as input, we can get the reconstruction loss for back-translation as: DISPLAYFORM0 Reinforcement Learning Since the gold transferred result of input is unavailable, we cannot evaluate the quality of the generated sentence directly. Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively. A pre-trained binary irony classifier based on CNN BIBREF20 is used to evaluate how ironic a sentence is. We denote the parameter of the classifier as INLINEFORM0 and it is fixed during the training process. In order to facilitate the transformation, we design the irony reward as the difference between the irony score of the input sentence and that of the output sentence. Formally, when we input a non-ironic sentence INLINEFORM0 and transfer it to an ironic sentence INLINEFORM1 , our irony reward is defined as: DISPLAYFORM0 where INLINEFORM0 denotes ironic style and INLINEFORM1 is the probability of that a sentence INLINEFORM2 is ironic. To preserve the sentiment polarity of the input sentence, we also need to use classifiers to evaluate the sentiment polarity of the sentences. However, the sentiment analysis of ironic sentences and non-ironic sentences are different. In the case of figurative languages such as irony, sarcasm or metaphor, the sentiment polarity of the literal meaning may differ significantly from that of the intended figurative meaning BIBREF0 . As we aim to train our model to transfer sentences from non-ironic to ironic, using only one classifier is not enough. As a result, we implement two pre-trained sentiment classifiers for non-ironic sentences and ironic sentences respectively. We denote the parameter of the sentiment classifier for ironic sentences as INLINEFORM0 and that of the sentiment classifier for non-ironic sentences as INLINEFORM1 . A challenge, when we implement two classifiers to evaluate the sentiment polarity, is that the two classifiers trained with different datasets may have different distributions of scores. That means we cannot directly calculate the sentiment reward with scores applied by two classifiers. To alleviate this problem and standardize the prediction results of two classifiers, we set a threshold for each classifier and subtract the respective threshold from scores applied by the classifier to obtain the comparative sentiment polarity score. 
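The reward equations themselves did not survive extraction in this copy (the DISPLAYFORM placeholders above), so the following sketch only mirrors the prose: a pre-trained irony classifier scores the input and the output, the irony reward is the gain in irony probability, and each sentiment score is standardized by subtracting the corresponding classifier's threshold. The scoring callables and threshold values are placeholders, and the sign convention is an assumption.

```python
from typing import Callable

# Placeholder scoring functions: each maps a sentence to a probability in [0, 1].
IronyScorer = Callable[[str], float]      # P(sentence is ironic)
SentimentScorer = Callable[[str], float]  # P(sentence is positive)

def irony_reward(x: str, y: str, p_ironic: IronyScorer) -> float:
    """Reward is the gain in irony probability from input x to generated y
    (sign convention assumed; the paper's exact equation is not shown here)."""
    return p_ironic(y) - p_ironic(x)

def standardized_sentiment(sentence: str, p_positive: SentimentScorer,
                           threshold: float) -> float:
    """Subtract the classifier-specific threshold so that scores from the
    irony and non-irony sentiment classifiers become comparable."""
    return p_positive(sentence) - threshold
```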
We get the optimal threshold by maximizing the ability of the classifier according to the distribution of our training data. We denote the threshold of ironic sentiment classifier as INLINEFORM0 and the threshold of non-ironic sentiment classifier as INLINEFORM1 . The standardized sentiment score is defined as INLINEFORM2 and INLINEFORM3 where INLINEFORM4 denotes the positive sentiment polarity and INLINEFORM5 is the probability of that a sentence is positive in sentiment polarity. As mentioned above, the input sentence and the generated sentence should express the same sentiment. For example, if we input a non-ironic sentence “I hate to be ignored" which is negative in sentiment polarity, the generated ironic sentence should be also negative, such as “I love to be ignored". To achieve sentiment preservation, we design the sentiment reward as that one minus the absolute value of the difference between the standardized sentiment score of the input sentence and that of the generated sentence. Formally, when we input a non-ironic sentence INLINEFORM0 and transfer it to an ironic sentence INLINEFORM1 , our sentiment reward is defined as: DISPLAYFORM0 To encourage our model to focus on both the irony accuracy and the sentiment preservation, we apply the harmonic mean of irony reward and sentiment reward: DISPLAYFORM0 Policy Gradient The policy gradient algorithm BIBREF21 is a simple but widely-used algorithm in reinforcement learning. It is used to maximize the expected reward INLINEFORM0 . The objective function to minimize is defined as: DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 is the reward of INLINEFORM2 and INLINEFORM3 is the input size. Training Details INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 in our model are Transformers BIBREF22 with 4 layers and 2 shared layers. The word embeddings of 128 dimensions are learned during the training process. Our maximum sentence length is set as 40. The optimizer is Adam BIBREF23 and the learning rate is INLINEFORM4 . The batch size is 32 and harmonic weight INLINEFORM5 in Eq.9 is 0.5. We set the interval INLINEFORM6 as 200. The model is pre-trained for 6 epochs and trained for 15 epochs for reinforcement learning. Irony Classifier: We implement a CNN classifier trained with our irony dataset. All the CNN classifiers we utilize in this paper use the same parameters as BIBREF20 . Sentiment Classifier for Irony: We first implement a one-layer LSTM network to classify ironic sentences in our dataset into positive and negative ironies. The LSTM network is trained with the dataset of Semeval 2015 Task 11 BIBREF0 which is used for the sentiment analysis of figurative language in twitter. Then, we use the positive ironies and negative ironies to train the CNN sentiment classifier for irony. Sentiment Classifier for Non-irony: Similar to the training process of the sentiment classifier for irony, we first implement a one-layer LSTM network trained with the dataset for the sentiment analysis of common twitters to classify the non-ironies into positive and negative non-ironies. Then we use the positive and negative non-ironies to train the sentiment classifier for non-irony. Baselines We compare our model with the following state-of-art generative models: BackTrans BIBREF7 : In BIBREF7 , authors propose a model using machine translation in order to preserve the meaning of the sentence while reducing stylistic properties. 
Unpaired BIBREF10 : In BIBREF10 , researchers implement a method to remove emotional words and add desired sentiment controlled by reinforcement learning. CrossAlign BIBREF6 : In BIBREF6 , authors leverage refined alignment of latent representations to perform style transfer and a cross-aligned auto-encoder is implemented. CPTG BIBREF24 : An interpolated reconstruction loss is introduced in BIBREF24 and a discriminator is implemented to control attributes in this work. DualRL BIBREF8 : In BIBREF8 , researchers use two reinforcement rewards simultaneously to control style accuracy and content preservation. Evaluation Metrics In order to evaluate sentiment preservation, we use the absolute value of the difference between the standardized sentiment score of the input sentence and that of the generated sentence. We call the value as sentiment delta (senti delta). Besides, we report the sentiment accuracy (Senti ACC) which measures whether the output sentence has the same sentiment polarity as the input sentence based on our standardized sentiment classifiers. The BLEU score BIBREF25 between the input sentences and the output sentences is calculated to evaluate the content preservation performance. In order to evaluate the overall performance of different models, we also report the geometric mean (G2) and harmonic mean (H2) of the sentiment accuracy and the BLEU score. As for the irony accuracy, we only report it in human evaluation results because it is more accurate for the human to evaluate the quality of irony as it is very complicated. We first sample 50 non-ironic input sentences and their corresponding output sentences of different models. Then, we ask four annotators who are proficient in English to evaluate the qualities of the generated sentences of different models. They are required to rank the output sentences of our model and baselines from the best to the worst in terms of irony accuracy (Irony), Sentiment preservation (Senti) and content preservation (Content). The best output is ranked with 1 and the worst output is ranked with 6. That means that the smaller our human evaluation value is, the better the corresponding model is. Results and Discussions Table TABREF35 shows the automatic evaluation results of the models in the transformation from non-ironic sentences to ironic sentences. From the results, our model obtains the best result in sentiment delta. The DualRL model achieves the highest result in other metrics, but most of its outputs are the almost same as the input sentences. So it is reasonable that DualRL system outperforms ours in these metrics but it actually does not transfer the non-ironic sentences to ironic sentences at all. From this perspective, we cannot view DualRL as an effective model for irony generation. In contrast, our model gets results close to those of DualRL and obtains a balance between irony accuracy, sentiment preservation, and content preservation if we also consider the irony accuracy discussed below. And from human evaluation results shown in Table TABREF36 , our model gets the best average rank in irony accuracy. And as mentioned above, the DualRL model usually does not change the input sentence and outputs the same sentence. Therefore, it is reasonable that it obtains the best rank in sentiment and content preservation and ours is the second. However, it still demonstrates that our model, instead of changing nothing, transfers the style of the input sentence with content and sentiment preservation at the same time. 
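The automatic metrics described above (sentiment delta, BLEU between input and output, and the geometric and harmonic means G2 and H2 of sentiment accuracy and BLEU) are simple to compute. The sketch below uses NLTK's sentence-level BLEU as an illustration; the smoothing choice is an assumption, and the paper may report corpus-level BLEU instead.

```python
import math

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def senti_delta(score_in: float, score_out: float) -> float:
    # Absolute difference between standardized sentiment scores of input and output.
    return abs(score_in - score_out)

def content_bleu(input_sent: str, output_sent: str) -> float:
    # BLEU between the input and the generated sentence (content preservation).
    smooth = SmoothingFunction().method1
    return sentence_bleu([input_sent.split()], output_sent.split(),
                         smoothing_function=smooth)

def g2_h2(senti_acc: float, bleu: float) -> tuple[float, float]:
    g2 = math.sqrt(senti_acc * bleu)                                   # geometric mean
    h2 = 2 * senti_acc * bleu / (senti_acc + bleu) if senti_acc + bleu else 0.0  # harmonic mean
    return g2, h2
```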
Case Study In the section, we present some example outputs of different models. Table TABREF37 shows the results of the transformation from non-ironic sentences to ironic sentences. We can observe that: (1) The BackTrans system, the Unpaired system, the CrossAlign system and the CPTG system tends to generate sentences which are towards irony but do not preserve content. (2) The DualRL system preserves content and sentiment very well but even does not change the input sentence. (3) Our model considers both aspects and achieves a better balance among irony accuracy, sentiment and content preservation. Error Analysis Although our model outperforms other style transfer baselines according to automatic and human evaluation results, there are still some failure cases because irony generation is still a very challenging task. We would like to share the issues we meet during our experiments and our solutions to some of them in this section. No Change: As mentioned above, many style transfer models, such as DualRL, tend to make few changes to the input sentence and output the same sentence. Actually, this is a common issue for unsupervised style transfer systems and we also meet it during our experiments. The main reason for the issue is that rewards for content preservation are too prominent and rewards for style accuracy cannot work well. In contrast, in order to guarantee the readability and fluency of the output sentence, we also cannot emphasize too much on rewards for style accuracy because it may cause some other issues such as word repetition mentioned below. A method to solve the problem is tuning hyperparameters and this is also the method we implement in this work. As for content preservation, maybe MLE methods such as back-translation are not enough because they tend to force models to generate specific words. In the future, we should further design some more suitable methods to control content preservation for models without disentangling style and content representations, such as DualRL and ours. Word Repetition: During our experiments, we observe that some of the outputs prefer to repeat the same word as shown in Table TABREF38 . This is because reinforcement learning rewards encourage the model to generate words which can get high scores from classifiers and even back-translation cannot stop it. Our solution is that we can lower the probability of decoding a word in decoders if the word has been generated in the previous time steps during testing. We also try to implement this method during training time but obtain worse performances because it may limit the effects of training. Some previous studies utilize language models to control the fluency of the output sentence and we also try this method. Nonetheless, pre-training a language model with tweets and using it to generate rewards is difficult because tweets are more casual and have more noise. Rewards from that kind of language model are usually not accurate and may confuse the model. In the future, we should come up with better methods to model language fluency with the consideration of irony accuracy, sentiment and content preservation, especially for tweets. Improper Words: As ironic style is hard for our model to learn, it may generate some improper words which make the sentence strange. As the example shown in the Table TABREF38 , the sentiment word in the input sentence is “wonderful" and the model should change it into a negative word such as “sad" to make the output sentence ironic. 
However, the model changes “friday" and “fifa" which are not related to ironic styles. We have not found a very effective method to address this issue and maybe we should further explore stronger models to learn ironic styles better. Additional Experiments In this section, we describe some additional experiments on the transformation from ironic sentences to non-ironic sentences. Sometimes ironies are hard to understand and may cause misunderstanding, for which our task also explores the transformation from ironic sentences to non-ironic sentences. As shown in Table TABREF46 , we also conduct automatic evaluations and the conclusions are similar to those of the transformation from non-ironic sentences to ironic sentences. As for human evaluation results in Table TABREF47 , our model still can achieve the second-best results in sentiment and content preservation. Nevertheless, DualRL system and ours get poor performances in irony accuracy. The reason may be that the other four baselines tend to generate common and even not fluent sentences which are irrelevant to the input sentences and are hard to be identified as ironies. So annotators usually mark these output sentences as non-ironic sentences, which causes these models to obtain better performances than DualRL and ours but much poorer results in sentiment and content preservation. Some examples are shown in Table TABREF52 . Conclusion and Future Work In this paper, we first systematically define irony generation based on style transfer. Because of the lack of irony data, we make use of twitter and build a large-scale dataset. In order to control irony accuracy, sentiment preservation and content preservation at the same time, we also design a combination of rewards for reinforcement learning and incorporate reinforcement learning with a pre-training process. Experimental results demonstrate that our model outperforms other generative models and our rewards are effective. Although our model design is effective, there are still many errors and we systematically analyze them. In the future, we are interested in exploring these directions and our work may extend to other kinds of ironies which are more difficult to model. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What experiments are conducted? Only give me the answer and do not output any other words.
Answer:
[ "Irony Classifier, Sentiment Classifier for Irony, Sentiment Classifier for Non-irony, transformation from ironic sentences to non-ironic sentences" ]
276
5,932
single_doc_qa
a4845075338e4b5ab5122aabbcc5abab
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction In the age of information dissemination without quality control, it has enabled malicious users to spread misinformation via social media and aim individual users with propaganda campaigns to achieve political and financial gains as well as advance a specific agenda. Often disinformation is complied in the two major forms: fake news and propaganda, where they differ in the sense that the propaganda is possibly built upon true information (e.g., biased, loaded language, repetition, etc.). Prior works BIBREF0, BIBREF1, BIBREF2 in detecting propaganda have focused primarily at document level, typically labeling all articles from a propagandistic news outlet as propaganda and thus, often non-propagandistic articles from the outlet are mislabeled. To this end, EMNLP19DaSanMartino focuses on analyzing the use of propaganda and detecting specific propagandistic techniques in news articles at sentence and fragment level, respectively and thus, promotes explainable AI. For instance, the following text is a propaganda of type `slogan'. Trump tweeted: $\underbrace{\text{`}`{\texttt {BUILD THE WALL!}"}}_{\text{slogan}}$ Shared Task: This work addresses the two tasks in propaganda detection BIBREF3 of different granularities: (1) Sentence-level Classification (SLC), a binary classification that predicts whether a sentence contains at least one propaganda technique, and (2) Fragment-level Classification (FLC), a token-level (multi-label) classification that identifies both the spans and the type of propaganda technique(s). Contributions: (1) To address SLC, we design an ensemble of different classifiers based on Logistic Regression, CNN and BERT, and leverage transfer learning benefits using the pre-trained embeddings/models from FastText and BERT. We also employed different features such as linguistic (sentiment, readability, emotion, part-of-speech and named entity tags, etc.), layout, topics, etc. (2) To address FLC, we design a multi-task neural sequence tagger based on LSTM-CRF and linguistic features to jointly detect propagandistic fragments and its type. Moreover, we investigate performing FLC and SLC jointly in a multi-granularity network based on LSTM-CRF and BERT. (3) Our system (MIC-CIS) is ranked 3rd (out of 12 participants) and 4th (out of 25 participants) in FLC and SLC tasks, respectively. System Description ::: Linguistic, Layout and Topical Features Some of the propaganda techniques BIBREF3 involve word and phrases that express strong emotional implications, exaggeration, minimization, doubt, national feeling, labeling , stereotyping, etc. This inspires us in extracting different features (Table TABREF1) including the complexity of text, sentiment, emotion, lexical (POS, NER, etc.), layout, etc. To further investigate, we use topical features (e.g., document-topic proportion) BIBREF4, BIBREF5, BIBREF6 at sentence and document levels in order to determine irrelevant themes, if introduced to the issue being discussed (e.g., Red Herring). For word and sentence representations, we use pre-trained vectors from FastText BIBREF7 and BERT BIBREF8. System Description ::: Sentence-level Propaganda Detection Figure FIGREF2 (left) describes the three components of our system for SLC task: features, classifiers and ensemble. 
The arrows from features-to-classifier indicate that we investigate linguistic, layout and topical features in the two binary classifiers: LogisticRegression and CNN. For CNN, we follow the architecture of DBLP:conf/emnlp/Kim14 for sentence-level classification, initializing the word vectors by FastText or BERT. We concatenate features in the last hidden layer before classification. One of our strong classifiers includes BERT that has achieved state-of-the-art performance on multiple NLP benchmarks. Following DBLP:conf/naacl/DevlinCLT19, we fine-tune BERT for binary classification, initializing with a pre-trained model (i.e., BERT-base, Cased). Additionally, we apply a decision function such that a sentence is tagged as propaganda if prediction probability of the classifier is greater than a threshold ($\tau $). We relax the binary decision boundary to boost recall, similar to pankajgupta:CrossRE2019. Ensemble of Logistic Regression, CNN and BERT: In the final component, we collect predictions (i.e., propaganda label) for each sentence from the three ($\mathcal {M}=3$) classifiers and thus, obtain $\mathcal {M}$ number of predictions for each sentence. We explore two ensemble strategies (Table TABREF1): majority-voting and relax-voting to boost precision and recall, respectively. System Description ::: Fragment-level Propaganda Detection Figure FIGREF2 (right) describes our system for FLC task, where we design sequence taggers BIBREF9, BIBREF10 in three modes: (1) LSTM-CRF BIBREF11 with word embeddings ($w\_e$) and character embeddings $c\_e$, token-level features ($t\_f$) such as polarity, POS, NER, etc. (2) LSTM-CRF+Multi-grain that jointly performs FLC and SLC with FastTextWordEmb and BERTSentEmb, respectively. Here, we add binary sentence classification loss to sequence tagging weighted by a factor of $\alpha $. (3) LSTM-CRF+Multi-task that performs propagandistic span/fragment detection (PFD) and FLC (fragment detection + 19-way classification). Ensemble of Multi-grain, Multi-task LSTM-CRF with BERT: Here, we build an ensemble by considering propagandistic fragments (and its type) from each of the sequence taggers. In doing so, we first perform majority voting at the fragment level for the fragment where their spans exactly overlap. In case of non-overlapping fragments, we consider all. However, when the spans overlap (though with the same label), we consider the fragment with the largest span. Experiments and Evaluation Data: While the SLC task is binary, the FLC consists of 18 propaganda techniques BIBREF3. We split (80-20%) the annotated corpus into 5-folds and 3-folds for SLC and FLC tasks, respectively. The development set of each the folds is represented by dev (internal); however, the un-annotated corpus used in leaderboard comparisons by dev (external). We remove empty and single token sentences after tokenization. Experimental Setup: We use PyTorch framework for the pre-trained BERT model (Bert-base-cased), fine-tuned for SLC task. In the multi-granularity loss, we set $\alpha = 0.1$ for sentence classification based on dev (internal, fold1) scores. We use BIO tagging scheme of NER in FLC task. For CNN, we follow DBLP:conf/emnlp/Kim14 with filter-sizes of [2, 3, 4, 5, 6], 128 filters and 16 batch-size. We compute binary-F1and macro-F1 BIBREF12 in SLC and FLC, respectively on dev (internal). Experiments and Evaluation ::: Results: Sentence-Level Propaganda Table TABREF10 shows the scores on dev (internal and external) for SLC task. 
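(An aside on the experimental setup above: the FLC sequence taggers use a BIO scheme, so each annotated propaganda fragment has to be projected onto token-level labels. The sketch below shows one generic way to do this for character-offset spans; the token/offset representation and the "Slogan" label format are assumptions, not the shared task's exact preprocessing.)

```python
# Generic sketch: convert character-level propaganda fragments into BIO tags
# for tokens given as (text, start_char, end_char) triples.

def spans_to_bio(tokens, fragments):
    """tokens: list of (text, start, end); fragments: list of (start, end, technique)."""
    tags = ["O"] * len(tokens)
    for frag_start, frag_end, technique in fragments:
        inside = False
        for i, (_, tok_start, tok_end) in enumerate(tokens):
            if tok_start < frag_end and tok_end > frag_start:   # token overlaps fragment
                tags[i] = ("I-" if inside else "B-") + technique
                inside = True
            elif inside:
                break
    return tags

# Example with the slogan fragment from the task description:
tokens = [("Trump", 0, 5), ("tweeted", 6, 13), (":", 13, 14),
          ("BUILD", 15, 20), ("THE", 21, 24), ("WALL", 25, 29), ("!", 29, 30)]
print(spans_to_bio(tokens, [(15, 30, "Slogan")]))
# ['O', 'O', 'O', 'B-Slogan', 'I-Slogan', 'I-Slogan', 'I-Slogan']
```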
Observe that the pre-trained embeddings (FastText or BERT) outperform TF-IDF vector representation. In row r2, we apply logistic regression classifier with BERTSentEmb that leads to improved scores over FastTextSentEmb. Subsequently, we augment the sentence vector with additional features that improves F1 on dev (external), however not dev (internal). Next, we initialize CNN by FastTextWordEmb or BERTWordEmb and augment the last hidden layer (before classification) with BERTSentEmb and feature vectors, leading to gains in F1 for both the dev sets. Further, we fine-tune BERT and apply different thresholds in relaxing the decision boundary, where $\tau \ge 0.35$ is found optimal. We choose the three different models in the ensemble: Logistic Regression, CNN and BERT on fold1 and subsequently an ensemble+ of r3, r6 and r12 from each fold1-5 (i.e., 15 models) to obtain predictions for dev (external). We investigate different ensemble schemes (r17-r19), where we observe that the relax-voting improves recall and therefore, the higher F1 (i.e., 0.673). In postprocess step, we check for repetition propaganda technique by computing cosine similarity between the current sentence and its preceding $w=10$ sentence vectors (i.e., BERTSentEmb) in the document. If the cosine-similarity is greater than $\lambda \in \lbrace .99, .95\rbrace $, then the current sentence is labeled as propaganda due to repetition. Comparing r19 and r21, we observe a gain in recall, however an overall decrease in F1 applying postprocess. Finally, we use the configuration of r19 on the test set. The ensemble+ of (r4, r7 r12) was analyzed after test submission. Table TABREF9 (SLC) shows that our submission is ranked at 4th position. Experiments and Evaluation ::: Results: Fragment-Level Propaganda Table TABREF11 shows the scores on dev (internal and external) for FLC task. Observe that the features (i.e., polarity, POS and NER in row II) when introduced in LSTM-CRF improves F1. We run multi-grained LSTM-CRF without BERTSentEmb (i.e., row III) and with it (i.e., row IV), where the latter improves scores on dev (internal), however not on dev (external). Finally, we perform multi-tasking with another auxiliary task of PFD. Given the scores on dev (internal and external) using different configurations (rows I-V), it is difficult to infer the optimal configuration. Thus, we choose the two best configurations (II and IV) on dev (internal) set and build an ensemble+ of predictions (discussed in section SECREF6), leading to a boost in recall and thus an improved F1 on dev (external). Finally, we use the ensemble+ of (II and IV) from each of the folds 1-3, i.e., $|{\mathcal {M}}|=6$ models to obtain predictions on test. Table TABREF9 (FLC) shows that our submission is ranked at 3rd position. Conclusion and Future Work Our system (Team: MIC-CIS) explores different neural architectures (CNN, BERT and LSTM-CRF) with linguistic, layout and topical features to address the tasks of fine-grained propaganda detection. We have demonstrated gains in performance due to the features, ensemble schemes, multi-tasking and multi-granularity architectures. Compared to the other participating systems, our submissions are ranked 3rd and 4th in FLC and SLC tasks, respectively. In future, we would like to enrich BERT models with linguistic, layout and topical features during their fine-tuning. 
Further, we would also be interested in understanding and analyzing the neural network learning, i.e., extracting salient fragments (or key-phrases) in the sentence that generate propaganda, similar to pankajgupta:2018LISA in order to promote explainable AI. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Which basic neural architecture performs best by itself? Only give me the answer and do not output any other words.
Answer:
[ "BERT" ]
276
2,382
single_doc_qa
5d119583c9cb4703ad87d6a836aa0300
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Margaret Way (b. Brisbane d. Cleveland, Queensland, Australia ) was an Australian writer of romance novels and women's fiction. A prolific author, Way wrote more than 120 novels since 1970, many through Mills & Boon, a romance imprint of British publisher Harlequin UK Ltd., owned by Harlequin Enterprises. Biography Before her marriage, she was a well-known pianist, teacher, vocal coach and accompanist. She began writing when her son, Laurence Way, was born, a friend took a pile of Mills & Boon books to her, she read all and decided that she also could write these types of novels. She began to write and promote her country with her stories set in Australia. She sold her first novels in 1970. Margaret Way lives with her family in her native Brisbane. Beginning in 2013, Margaret began to self-publish, releasing her first "e-book" mid-July. Margaret died on the 10th of August 2022 in Cleveland, Queensland. Bibliography Single Novels King Country (1970) Blaze of Silk (1970) The Time of the Jacaranda (1970) Bauhinia Junction (1971) Man from Bahl Bahla (1971) Summer Magic (1971) Return to Belle Amber (1971) Ring of Jade (1972) Copper Moon (1972) Rainbow Bird (1972) Man Like Daintree (1972) Noonfire (1972) Storm Over Mandargi (1973) Wind River (1973) Love Theme (1974) McCabe's Kingdom (1974) Sweet Sundown (1974) Reeds of Honey (1975) Storm Flower (1975) Lesson in Loving (1975) Flight into Yesterday (1976) Red Cliffs of Malpara (1976) Man on Half-moon (1976) Swan's Reach (1976) Mutiny in Paradise (1977) One Way Ticket (1977) Portrait of Jaime (1977) Black Ingo (1977) Awakening Flame (1978) Wild Swan (1978) Ring of Fire (1978) Wake the Sleeping Tiger (1978) Valley of the Moon (1979) White Magnolia (1979) Winds of Heaven (1979) Blue Lotus (1979) Butterfly and the Baron (1979) Golden Puma (1980) Temple of Fire (1980) Lord of the High Valley (1980) Flamingo Park (1980) North of Capricorn (1981) Season for Change (1981) Shadow Dance (1981) McIvor Affair (1981) Home to Morning Star (1981) Broken Rhapsody (1982) The Silver Veil (1982) Spellbound (1982) Hunter's Moon (1982) Girl at Cobalt Creek (1983) No Alternative (1983) House of Memories (1983) Almost a Stranger (1984) A place called Rambulara (1984) Fallen Idol (1984) Hunt the Sun (1985) Eagle's Ridge (1985) The Tiger's Cage (1986) Innocent in Eden (1986) Diamond Valley (1986) Morning Glory (1988) Devil Moon (1988) Mowana Magic (1988) Hungry Heart (1988) Rise of an Eagle (1988) One Fateful Summer (1993) The Carradine Brand (1994) Holding on to Alex (1997) The Australian Heiress (1997) Claiming His Child (1999) The Cattleman's Bride (2000) The Cattle Baron (2001) The Husbands of the Outback (2001) Secrets of the Outback (2002) With This Ring (2003) Innocent Mistress (2004) Cattle Rancher, Convenient Wife (2007) Outback Marriages (2007) Promoted: Nanny to Wife (2007) Cattle Rancher, Secret Son (2007) Genni's Dilemma (2008) Bride At Briar Ridge (2009) Outback Heiress, Surprise Proposal (2009) Cattle Baron, Nanny Needed (2009) Legends of the Outback Series Mail Order Marriage (1999) The Bridesmaid's Wedding (2000) The English Bride (2000) A Wife at Kimbara (2000) Koomera Crossing Series Sarah's Baby (2003) Runaway Wife (2003) Outback Bridegroom (2003) Outback Surrender (2003) Home to Eden (2004) McIvor Sisters Series The Outback Engagement (2005) Marriage 
at Murraree (2005) Men Of The Outback Series The Cattleman (2006) The Cattle Baron's Bride (2006) Her Outback Protector (2006) The Horseman (2006) Outback Marriages Series Outback Man Seeks Wife (2007) Cattle Rancher, Convenient Wife (2007) Barons of the Outback Series Multi-Author Wedding At Wangaree Valley (2008) Bride At Briar's Ridge (2008) Family Ties Multi-Author Once Burned (1995) Hitched! Multi-Author A Faulkner Possession (1996) Simply the Best Multi-Author Georgia and the Tycoon (1997) The Big Event Multi-Author Beresford's Bride (1998) Guardian Angels Multi-Author Gabriel's Mission (1998) Australians Series Multi-Author 7. Her Outback Man (1998) 17. Master of Maramba (2001) 19. Outback Fire (2001) 22. Mistaken Mistress (2002) 24. Outback Angel (2002) 33. The Australian Tycoon's Proposal (2004) 35. His Heiress Wife (2004) Marrying the Boss Series Multi-Author Boardroom Proposal (1999) Contract Brides Series Multi-Author Strategy for Marriage (2002) Everlasting Love Series Multi-Author Hidden Legacy (2008) Diamond Brides Series Multi-Author The Australian's Society Bride (2008) Collections Summer Magic / Ring of Jade / Noonfire (1981) Wife at Kimbara / Bridesmaid's Wedding (2005) Omnibus in Collaboration Pretty Witch / Without Any Amazement / Storm Over Mandargi (1977) (with Lucy Gillen and Margaret Malcolm) Dear Caliban / Heart of the Eagle / Swans' Reach (1978) (with Jane Donnelly and Elizabeth Graham) The Bonds of Matrimony / Dragon Island / Reeds of Honey (1979) (with Elizabeth Hunter and Henrietta Reid) The Man Outside / Castles in Spain / McCabe's Kingdom (1979) (with Jane Donnelly and Rebecca Stratton) Winds From The Sea / Island of Darkness / Wind River (1979) (with Margaret Pargeter and Rebecca Stratton) Moorland Magic / Tree of Idleness / Sweet Sundown (1980) (with Elizabeth Ashton and Elizabeth Hunter) The Shifting Sands / Portrait of Jaime / Touched by Fire (1982) (with Jane Donnelly and Kay Thorpe) Head of Chancery / Wild Heart / One-Way Ticket (1986) (with Betty Beaty and Doris Smith) Heart of the Scorpion / The Winds of Heaven / Sweet Compulsion (1987) (with Janice Gray and Victoria Woolf) One Brief Sweet Hour / Once More With Feeling / Blue Lotus (1990) (with Jane Arbor and Natalie Sparks) Marry Me Cowboy (1995) (with Janet Dailey, Susan Fox and Anne McAllister) Husbands on Horseback (1996) (with Diana Palmer) Wedlocked (1999) (with Day Leclaire and Anne McAllister) Mistletoe Magic (1999) (with Betty Neels and Rebecca Winters) The Australians (2000) (with Helen Bianchin and Miranda Lee) Weddings Down Under (2001) (with Helen Bianchin and Jessica Hart) Outback Husbands (2002) (with Marion Lennox) The Mother's Day Collection (2002) (with Helen Dickson and Kate Hoffmann) Australian Nights (2003) (with Miranda Lee) Outback Weddings (2003) (with Barbara Hannay) Australian Playboys (2003) (with Helen Bianchin and Marion Lennox) Australian Tycoons (2004) (with Emma Darcy and Marion Lennox) A Mother's Day Gift (2004) (with Anne Ashley and Lucy Monroe) White Wedding (2004) (with Judy Christenberry and Jessica Steele) A Christmas Engagement (2004) (with Sara Craven and Jessica Matthews) A Very Special Mother's Day (2005) (with Anne Herries) All I Want for Christmas... 
(2005) (with Betty Neels and Jessica Steele) The Mills and Boon Collection (2006) (with Caroline Anderson and Penny Jordan) Outback Desire (2006) (with Emma Darcy and Carol Marinelli) To Mum, with Love (2006) (with Rebecca Winters) Australian Heroes (2007) (with Marion Lennox and Fiona McArthur) Tall, Dark and Sexy (2008) (with Caroline Anderson and Helen Bianchin) The Boss's Proposal (2008) (with Jessica Steele and Patricia Thayer) Island Heat / Outback Man Seeks Wife / Prince's Forbidden Virgin / One Night Before Marriage / Their Lost-and-found Family / Single Dad's Marriage Wish (2008) (with Robyn Donald, Marion Lennox, Carol Marinelli, Sarah Mayberry and Anne Oliver) Australian Billionaires (2009) (with Jennie Adams and Amy Andrews) Cattle Baron : Nanny Needed / Bachelor Dad on Her Doorstep (2009) (with Michelle Douglas) External links Margaret Way at Harlequin Enterprises Ltd Australian romantic fiction writers Australian women novelists Living people Year of birth missing (living people) Women romantic fiction writers Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Where was Margaret Way born and where did she die? Only give me the answer and do not output any other words.
Answer:
[ "Margaret Way was born in Brisbane and died in Cleveland, Queensland, Australia." ]
276
2,167
single_doc_qa
f55e3c4f46fc4811bb30b7da1cc1d427
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Named Entity Recognition (NER) is a foremost NLP task to label each atomic element of a sentence into specific categories like "PERSON", "LOCATION", "ORGANIZATION" and othersBIBREF0. There has been extensive NER research on the English, German, Dutch and Spanish languages BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, and notable research on low-resource South Asian languages like HindiBIBREF6, IndonesianBIBREF7 and other Indian languages (Kannada, Malayalam, Tamil and Telugu)BIBREF8. However, there has been no study on developing neural NER for the Nepali language. In this paper, we propose a neural-based Nepali NER using a recent state-of-the-art grapheme-level architecture that requires neither hand-crafted features nor data pre-processing. Recent neural architectures like BIBREF1 relax the need to hand-craft features and to use part-of-speech tags to determine the category of an entity. However, these architectures have been studied for languages like English and German and have not been applied to languages like Nepali, which is a low-resource language, i.e., one with a limited data set to train the model. Traditional methods like Hidden Markov Models (HMM) with rule-based approachesBIBREF9,BIBREF10, and Support Vector Machines (SVM) with manual feature-engineeringBIBREF11 have been applied, but they perform poorly compared to neural models. Moreover, there has been no research on Nepali NER using neural networks. Therefore, we created a named-entity-annotated dataset, partly with the help of Dataturk, to train a neural model. The texts used for this dataset were collected from various daily news sources from Nepal around the years 2015-2016. Following are our contributions: We present a novel Named Entity Recognizer (NER) for the Nepali language. To the best of our knowledge, we are the first to propose a neural-based Nepali NER. As there is no good-quality dataset to train NER, we release a dataset to support future research. We perform an empirical evaluation of our model against state-of-the-art models with a relative improvement of up to 10%. In this paper, we present works similar to ours in Section SECREF2. We describe our approach and dataset statistics in Sections SECREF3 and SECREF4, followed by our experiments, evaluation and discussion in Sections SECREF5, SECREF6, and SECREF7. We conclude with our observations in Section SECREF8. To facilitate further research, our code and dataset will be made available at github.com/link-yet-to-be-updated Related Work There has been a handful of research on the Nepali NER task based on approaches like Support Vector Machines with gazetteer listsBIBREF11 and Hidden Markov Models with gazetteer listsBIBREF9,BIBREF10. BIBREF11 uses an SVM along with features like the first word, word length, digit features and gazetteers (person, organization, location, middle name, verb, designation and others). It uses a one-vs-rest classification model to classify each word into the different entity classes. However, it does not take the context word into account while training the model. Similarly, BIBREF9 and BIBREF10 use Hidden Markov Models with an n-gram technique for extracting POS-tags. POS-tags with common noun, proper noun or a combination of both are combined together, and then a gazetteer list is used as a look-up table to identify the named entities. 
Researchers have shown that neural networks like CNNsBIBREF12, RNNsBIBREF13, LSTMsBIBREF14 and GRUsBIBREF15 can capture the semantic knowledge of a language better with the help of pre-trained embeddings like word2vecBIBREF16, GloVeBIBREF17 or fastTextBIBREF18. Similar approaches have been applied to many South Asian languages like HindiBIBREF6, IndonesianBIBREF7 and BengaliBIBREF19. In this paper, we present a neural network architecture for the NER task in the Nepali language, which requires neither manual feature engineering nor data pre-processing during training. First, we compare BiLSTMBIBREF14, BiLSTM+CNNBIBREF20, BiLSTM+CRFBIBREF1 and BiLSTM+CNN+CRFBIBREF2 models with a CNN modelBIBREF0 and the Stanford CRF modelBIBREF21. Second, we compare models trained on general word embeddings, word embedding + character-level embedding, word embedding + part-of-speech (POS) one-hot encoding and word embedding + grapheme-clustered or sub-word embeddingBIBREF22. The experiments were performed on the dataset that we created and on the dataset received from the ILPRL lab. Our extensive study shows that augmenting word embeddings with character- or grapheme-level representations and a POS one-hot encoding vector yields better results than using general word embeddings alone. Approach In this section, we describe our approach to building our model. This model is partly inspired by multiple models: BIBREF20, BIBREF1 and BIBREF2. Approach ::: Bidirectional LSTM We used a bi-directional LSTM to capture word representations in both the forward and reverse directions of a sentence. Generally, LSTMs take inputs from the left (past) of the sentence and compute the hidden state. However, it has proven beneficialBIBREF23 to use a bi-directional LSTM, where hidden states are also computed from the right (future) of the sentence and both hidden states are concatenated to produce the final output $h_t$=[$\overrightarrow{h_t}$;$\overleftarrow{h_t}$], where $\overrightarrow{h_t}$ and $\overleftarrow{h_t}$ are the hidden states computed in the forward and backward directions respectively. Approach ::: Features ::: Word embeddings We have used Word2Vec BIBREF16, GloVe BIBREF17 and FastText BIBREF18 word vectors of 300 dimensions. These vectors were trained on the corpus obtained from the Nepali National Corpus. This pre-lemmatized corpus consists of 14 million words from books, web texts and newspapers. This corpus was mixed with the texts from the dataset before training the CBOW and skip-gram versions of word2vec using the gensim libraryBIBREF24. The trained model consists of vectors for 72782 unique words. Light pre-processing was performed on the corpus before training. For example, invalid characters and characters other than Devanagari were removed, but punctuation and numbers were not removed. We set the context window at 10 and dropped rare words whose count is below 5. These word embeddings were not frozen during training because fine-tuning word embeddings helps achieve better performance compared to frozen onesBIBREF20. We have used fastText embeddings in particular because of their sub-word representation ability, which is very useful in a highly inflectional language, as shown in Table TABREF25. We have trained the word embeddings in such a way that the sub-word size remains between 1 and 4. 
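To make the embedding setup above concrete, the following is a minimal sketch (not the authors' exact script) of how such 300-dimensional skip-gram fastText vectors could be trained with the gensim 4.x API, using the settings stated above: context window of 10, minimum count of 5, and sub-word sizes between 1 and 4. The corpus file name and the whitespace tokenization are assumptions for illustration only.

from gensim.models import FastText

# Assumed input format: one sentence per line, tokens separated by spaces.
with open("nepali_corpus.txt", encoding="utf-8") as f:
    sentences = [line.split() for line in f if line.strip()]

model = FastText(
    sentences=sentences,
    vector_size=300,  # embedding dimension used in the paper
    window=10,        # context window stated above
    min_count=5,      # drop rare words with count below 5
    sg=1,             # 1 = skip-gram; set sg=0 for the CBOW variant
    min_n=1,          # smallest sub-word (character n-gram) size
    max_n=4,          # largest sub-word (character n-gram) size
)
model.save("fasttext_skipgram_nepali.model")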
We particularly chose this size because in the Nepali language a single letter can also be a word, for example e, t, C, r, l, n, u, and a single character (grapheme) or sub-word can be formed from a mixture of dependent vowel signs and consonant letters, for example C + O + = CO, where three different consonant letters form a single sub-word. A two-dimensional visualization of the example word npAl is shown in FIGREF14. Principal Component Analysis (PCA) was used to generate this visualization, which helps us analyze the nearest-neighbor words of a given sample word. 84 and 104 nearest neighbors were observed using word2vec and fastText embeddings respectively on the same corpus. Approach ::: Features ::: Character-level embeddings BIBREF20 and BIBREF2 showed that character-level embeddings extracted using a CNN, when combined with word embeddings, enhance NER model performance significantly, as they are able to capture the morphological features of a word. Figure FIGREF7 shows the grapheme-level CNN used in our model, where the inputs to the CNN are graphemes. The character-level CNN is built in a similar fashion, except that the inputs are characters. Grapheme- or character-level embeddings of dimension 30 are randomly initialized with real values drawn from a uniform distribution on [0,1]. Approach ::: Features ::: Grapheme-level embeddings A grapheme is the atomic meaningful unit in the writing system of any language. Since the Nepali language is highly morphologically inflectional, we compared grapheme-level representations with character-level representations to evaluate their effect. For example, in character-level embedding, the word npAl is split into the characters n + + p + A + l, each of which has its own embedding. In grapheme-level embedding, however, the word npAl is clustered into the graphemes n + pA + l, and each grapheme has its own embedding. This grapheme-level embedding yields scores on par with character-level embedding in highly inflectional languages like Nepali, because graphemes also capture syntactic information similarly to characters. We created grapheme clusters using the uniseg package, which is helpful for Unicode text segmentation. Approach ::: Features ::: Part-of-speech (POS) one hot encoding We created one-hot encoded vectors of POS tags and concatenated them with the pre-trained word embeddings before passing them to the BiLSTM network. A sample of the data is shown in figure FIGREF13. Dataset Statistics ::: OurNepali dataset Since there was no publicly available standard Nepali NER dataset and we did not receive any dataset from previous researchers, we had to create our own dataset. This dataset contains sentences collected from daily newspapers from the years 2015-2016. This dataset has three major classes: Person (PER), Location (LOC) and Organization (ORG). Pre-processing was performed on the text before creation of the dataset; for example, all punctuation and numbers besides ',', '-', '|' and '.' were removed. Currently, the dataset is in the standard CoNLL-2003 IO formatBIBREF25. Since this dataset is not lemmatized originally, we lemmatized only the post-positions like Ek, kO, l, mA, m, my, jF, sg, aEG, which are just a few examples among the 299 post-positions in the Nepali language. We obtained these post-positions from sanjaalcorps and added a few more to match our dataset. We will release this list in our GitHub repository. We found that lemmatizing the post-positions boosted the F1 score by almost 10%. 
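As a purely illustrative sketch of the grapheme clustering described above: the paper uses the uniseg package, while the snippet below uses the third-party regex module's \X grapheme-cluster matcher as a stand-in, since both follow the Unicode text-segmentation rules. The Devanagari spelling of the example word (rendered as npAl above) is assumed to be नेपाल ("Nepal").

import regex  # third-party "regex" module, not the built-in "re"

word = "नेपाल"  # assumed Devanagari form of the example word

characters = list(word)                 # code points: ['न', 'े', 'प', 'ा', 'ल']
graphemes = regex.findall(r"\X", word)  # grapheme clusters: ['ने', 'पा', 'ल']

print(characters)  # five character-level units, one per code point
print(graphemes)   # three grapheme-level units, matching n + pA + l above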
In order to label our dataset with POS-tags, we first created a POS-annotated dataset of 6946 sentences and 16225 unique words extracted from the POS-tagged Nepali National Corpus and trained a BiLSTM model with 95.14% accuracy, which was used to create POS-tags for our dataset. The dataset released in our GitHub repository contains each word on a new line with space-separated POS-tags and entity-tags. Sentences are separated by an empty line. A sample sentence from the dataset is presented in figure FIGREF13. Dataset Statistics ::: ILPRL dataset After much time, we received the dataset from Bal Krishna Bal, ILPRL, KU. This dataset follows the standard CoNLL-2003 IOB formatBIBREF25 with POS tags. This dataset was prepared by ILPRL Lab, KU and KEIV Technologies. A few corrections, such as correcting the NER tags, had to be made to the dataset. The statistics of both datasets are presented in table TABREF23. Table TABREF24 presents the total entities (PER, LOC, ORG and MISC) from both of the datasets used in our experiments. Each dataset is divided into three parts, with 64%, 16% and 20% of the total going to the training set, development set and test set respectively. Experiments In this section, we present the details of training our neural network. The neural network architectures are implemented using the PyTorch framework BIBREF26. The training is performed on a single Nvidia Tesla P100 SXM2. We first ran our experiments on BiLSTM, BiLSTM-CNN, BiLSTM-CRF and BiLSTM-CNN-CRF using the hyper-parameters mentioned in Table TABREF30. Training and evaluation were done at the sentence level. The RNN variants are initialized randomly from $(-\sqrt{k},\sqrt{k})$ where $k=\frac{1}{hidden\_size}$. First, we loaded our dataset and built the vocabulary using the torchtext library, whose SequenceTaggingDataset class eased our data-loading process. We trained our models on the shuffled training set using the Adam optimizer with the hyper-parameters mentioned in table TABREF30. All our models were trained with a single layer of the LSTM network. We found that Adam gave better performance and faster convergence compared to Stochastic Gradient Descent (SGD). We chose those hyper-parameters after many ablation studies. A dropout of 0.5 is applied after the LSTM layer. For the CNN, we used 30 different filters of sizes 3, 4 and 5. The embeddings of each character or grapheme involved in a given word were passed through a pipeline of convolution, rectified linear unit and max-pooling. The resulting vectors were concatenated and a dropout of 0.5 was applied before passing them into a linear layer to obtain an embedding of size 30 for the given word. This resulting embedding is concatenated with the word embedding, which is again concatenated with the one-hot POS vector. Experiments ::: Tagging Scheme Currently, for our experiments we trained our models on the IO (Inside, Outside) format for both datasets; hence the datasets do not contain any B-type annotation, unlike the BIO (Beginning, Inside, Outside) scheme. Experiments ::: Early Stopping We used a simple early stopping technique where, if the validation loss does not decrease after 10 epochs, the training is stopped; otherwise the training runs for up to 100 epochs. In our experience, training usually stops around 30-50 epochs. 
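The early-stopping rule just described can be summarized in a short sketch: stop when the validation loss has not decreased for 10 consecutive epochs, otherwise train for at most 100 epochs. This is a minimal illustration, not the authors' code; train_one_epoch and validation_loss are hypothetical callables standing in for the actual PyTorch training and evaluation steps.

def fit(train_one_epoch, validation_loss, patience=10, max_epochs=100):
    """Run training with patience-based early stopping; return the best loss."""
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch()
        loss = validation_loss()
        if loss < best_loss:
            best_loss = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            break  # early stop: no improvement for `patience` epochs
    return best_loss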
Experiments ::: Hyper-parameters Tuning We ran our experiments looking for the best hyper-parameters by changing the learning rate over [0.1, 0.01, 0.001, 0.0001], the weight decay over [$10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$, $10^{-7}$], the batch size over [1, 2, 4, 8, 16, 32, 64, 128], and the hidden size over [8, 16, 32, 64, 128, 256, 512, 1024]. Table TABREF30 shows all other hyper-parameters used in our experiments for both datasets. Experiments ::: Effect of Dropout Figure FIGREF31 shows how we ended up choosing 0.5 as the dropout rate. When the dropout layer was not used, the F1 score was at its lowest. As we slowly increase the dropout rate, the F1 score also gradually increases; however, after a dropout rate of 0.5, the F1 score starts falling. Therefore, we chose 0.5 as the dropout rate for all other experiments performed. Evaluation In this section, we present the details regarding the evaluation and comparison of our models with other baselines. Table TABREF25 shows the study of the various embeddings and their comparison against each other on the OurNepali dataset. Here, the raw dataset is the dataset in which post-positions are not lemmatized. We can observe that pre-trained embeddings significantly improve the score compared to randomly initialized embeddings. We can deduce that skip-gram models perform better than CBOW models for both word2vec and fastText. Here, fastText_Pretrained represents the embedding readily available on the fastText website, while the other embeddings are trained on the Nepali National Corpus as mentioned in sub-section SECREF11. From table TABREF25, we can clearly observe that the model using fastText_Skip Gram embeddings outperforms all other models. Table TABREF35 shows the architecture comparison between all the models experimented with. The features used for the Stanford CRF classifier are words, letter n-grams of up to length 6, the previous word and the next word. This model is trained until the current function value is less than $1\mathrm {e}{-2}$. The hyper-parameters of the neural network experiments are set as shown in table TABREF30. Since the character-level and grapheme-level embeddings are randomly initialized, their scores are close. All models are evaluated using the CoNLL-2003 evaluation scriptBIBREF25 to calculate entity-wise precision, recall and F1 score. Discussion In this paper we show that we can exploit the power of neural networks to train models to perform downstream NLP tasks like Named Entity Recognition even in the Nepali language. We showed that the word vectors learned through the fastText skip-gram model perform better than other word embeddings because of their capability to represent sub-words, which is particularly important for capturing the morphological structure of words and sentences in a highly inflectional language like Nepali. This concept can come in handy for other Devanagari languages as well, because their written scripts have a similar syntactic structure. We also found that stemming post-positions can help a lot in improving model performance because of the inflectional characteristics of the Nepali language: when we separate out its inflections or morphemes, we minimize the variations of the same word, which gives the root word a stronger word vector representation compared to its inflected versions. We can clearly infer from tables TABREF23, TABREF24, and TABREF35 that we need more data to get better results, because the OurNepali dataset is almost ten times bigger than the ILPRL dataset in terms of entities. 
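The sweep described above can be pictured as a simple grid search over the four stated value lists. The sketch below is an assumption-laden skeleton rather than the authors' tuning code; train_and_evaluate is a hypothetical stub standing in for a full training run that would return a validation F1 score.

import random
from itertools import product

learning_rates = [0.1, 0.01, 0.001, 0.0001]
weight_decays  = [10 ** -k for k in range(1, 8)]   # 1e-1 ... 1e-7
batch_sizes    = [1, 2, 4, 8, 16, 32, 64, 128]
hidden_sizes   = [8, 16, 32, 64, 128, 256, 512, 1024]

def train_and_evaluate(lr, wd, bs, hs):
    """Hypothetical stub: train a model with these settings and return its F1."""
    return random.random()  # placeholder so the skeleton runs end to end

best = None
for lr, wd, bs, hs in product(learning_rates, weight_decays, batch_sizes, hidden_sizes):
    f1 = train_and_evaluate(lr, wd, bs, hs)
    if best is None or f1 > best[0]:
        best = (f1, lr, wd, bs, hs)
print("best configuration:", best)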
Conclusion and Future work In this paper, we proposed a novel NER for the Nepali language, achieved a relative improvement of up to 10%, and studied different factors affecting the performance of NER for the Nepali language. We also present a neural architecture, BiLSTM+CNN(grapheme-level), which turns out to perform on par with BiLSTM+CNN(character-level) under the same configuration. We believe this will help not only the Nepali language but also other languages falling under the umbrella of Devanagari languages. Our models BiLSTM+CNN(grapheme-level) and BiLSTM+CNN(G)+POS outperform all other models experimented with on the OurNepali and ILPRL datasets respectively. Since this is the first named entity recognition research for the Nepali language using neural networks, there is much room for improvement. We believe initializing the grapheme-level embedding with fastText embeddings, rather than randomly initializing it, might help boost the performance. In the future, we plan to apply other recent techniques like BERT, ELMo and FLAIR to study their effect on a low-resource language like Nepali. We also plan to improve the model using cross-lingual or multi-lingual parameter-sharing techniques by jointly training with other Devanagari languages like Hindi and Bengali. Finally, we would like to contribute our dataset to the Nepali NLP community to move forward the research going on in the language understanding domain. We believe there should be a special committee to create and maintain such datasets for Nepali NLP and to organize various competitions, which would elevate NLP research in Nepal. Some of the future works are listed below: proper initialization of the grapheme-level embedding from fastText embeddings; applying a robust POS-tagger to the Nepali dataset; lemmatizing the OurNepali dataset with a robust and efficient lemmatizer; improving the Nepali language score with cross-lingual learning techniques; and creating more datasets using the Wikipedia/Wikidata framework. Acknowledgments The authors of this paper would like to express sincere thanks to Bal Krishna Bal, Professor at Kathmandu University, for providing us the POS-tagged Nepali NER data. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How many different types of entities exist in the dataset? Only give me the answer and do not output any other words.
Answer:
[ "OurNepali contains 3 different types of entities, ILPRL contains 4 different types of entities", "three" ]
276
4,143
single_doc_qa
8ec0dcee39204a20990353aabcbc3336
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: 2015-05-14 Assigned to ROVI GUIDES, INC. reassignment ROVI GUIDES, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: TV GUIDE, INC. 2015-05-14 Assigned to UV CORP. reassignment UV CORP. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: UNITED VIDEO PROPERTIES, INC. 2015-05-14 Assigned to TV GUIDE, INC. reassignment TV GUIDE, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: UV CORP. Methods and systems are described herein for quickly and easily displaying supplemental information about an event occurring in a media asset. In some embodiments, a media application may use a content-recognition module to determine the context of an event and distribute itemized tasks to multiple entities in order to generate the supplemental information about the event. While viewing media assets (e.g., a television program), users may wish to learn more information about an event (e.g., a statement made by a person appearing in the media asset, the validity of a claim in an advertisement, etc.) occurring in the media asset. While some media assets allow a user to select additional options or added features (e.g., pop-up biographies about the cast and crew), when the added features appear and what topic the added features concern are determined by the content producer and not the user. Furthermore, as the added feature is derived from the content producer, the added feature may be biased or may present limited viewpoints about an event. Therefore, added features provided by a content producer may not provide the added information about an event that a user desires. In order to gain the added information that a user desires, the user may use additional devices (e.g., a laptop computer) to search (e.g., using an Internet search engine) for more information about the event. However, without knowing the proper context (e.g., who said the statement, what was the tone of the statement, when was the statement said, etc.) of the event or what search terms to use to describe the context of the event (e.g., how to describe the tone of the statement), a user may not be able to determine (even using a search engine) more information about the event. Moreover, the use of general search terms may not provide the accuracy or precision needed by the user. Furthermore, even if a user may eventually determine the information, the effort and time required may distract the user from the media asset. Accordingly, methods and systems are described herein for quickly and easily displaying supplemental information about an event occurring in a media asset. In some embodiments, a media application may use a content-recognition module to determine the context of an event in a media asset and distribute itemized tasks to multiple users in order to generate the supplemental information about the event. The context-recognition module prevents the user from being distracted from the media asset (e.g., while the user attempts to describe the context of the event or search for information about the event). In addition, by distributing tasks to multiple entities (e.g., crowd-sourcing), the media application may collect large amounts of information in relatively short periods of time (or in real-time) and aggregate and/or filter the information to generate the supplemental information about the event based on multiple viewpoints and/or sources. 
By using multiple viewpoints and/or sources, the media application enhances the completeness (e.g., by providing unbiased information) and accuracy of the supplemental information. For example, when a statement or action is made by a character or person appearing on a media asset (e.g., a television program), a user may request supplemental information about the statement or action. In response, the media application may determine the context of the statement (e.g., who said the statement and to what the statement was referring) or action (e.g., what was the reason for the action). After determining the context of the statement or action, the media application may itemize into tasks the additional information it requires in order to generate the supplemental information. The media application may then transmit requests including the tasks to a plurality of other users. Based on the responses from the plurality of other users, the media application may generate the supplemental information for display to the user. In some embodiments, a media application may use multiple types of content-recognition modules and/or algorithms to determine the context of an event. For example, the media application may process data associated with the event in order to determine the context of an event. In some embodiments, processing the various types of data may include cross-referencing the data in a database indicating the different contexts the event may have. In some embodiments, a media application may generate supplemental information about an event in a media asset in response to a user request. In order to generate the supplemental information, the media application may transmit, to multiple users, a request for additional information regarding a context of an event shown in a media asset. Upon receiving messages from the plurality of users that include the requested additional information, the media application may generate the supplemental information associated with the context of the event based on the messages. It should be noted, the systems and/or methods described above may be applied to, or used in accordance with, other systems, methods and/or apparatuses. FIG. 9 is a flowchart of illustrative steps for generating supplemental information based on additional information provided by a plurality of users in accordance with some embodiments of the disclosure. Accordingly, methods and systems are described herein for quickly and easily displaying supplemental information about an event occurring in a media asset. The methods and systems described herein alleviate the need for a user to determine the proper context (e.g., who said a statement, what was the tone of the statement, when was the statement said, etc.) of an event in a media asset, or the search terms to use to describe the event (e.g., the proper search terms to describe the tone of the statement), in order to determine more information about the event. In addition, the methods and systems increase the completeness and accuracy of the information compared to information gathered using traditional searching methods (e.g., an Internet search engine), without distracting the user from the media asset. In some embodiments, a media application may receive a user input from a user device for supplemental information about the context of an event shown in a media asset. 
The media application may determine additional information required to generate the supplemental information about the context of the event shown in a media asset, and transmit requests for the additional information to one or more users. The media application may receive one or more messages, which include the requested additional information, from the one or more users and generate the supplemental information based on the one or more message. The media application may then instruct the user device to display the supplemental information. As used herein, “supplemental information” refers to any information related to or associated with an event in a media asset. For example, supplemental information may include, but is not limited to, the verification of a statement or claim in a media asset, further descriptions and/or information about objects or entities shown and/or described in a media asset, and/or any other information, including, but not limited to, a video or audio segment, that may interest a user about an event in a media asset. In some embodiments, the media application may generate supplemental information based on one or more pieces of additional information. As used herein, “additional information” refers to any information used to generate supplemental information. For example, in an embodiment in which supplement information is the verification of a statement made by a person displayed in a media asset, and a request for the additional information from the media application includes a request for a fact needed to verify the factual basis of the statement, the additional information may be the fact used to verify the statement. For example, if an advertisement claims to have the best product on the market, the media application may use additional information such as the name of the product in question, a list of all other products in the market, and the results of a comparison study of the product in question to all other products to determine whether or not the product is actually the “best” product on the market. Additionally or alternatively, the media application may request industry and/or user reviews related to the event (e.g., reviews indicating the quality of the product). The media application may then use the information in the reviews to generate the supplemental information. As used herein, an “event” is any action (e.g., a verbal statement, opinion and/or physical movement), segment (e.g., a portion of a news broadcast featuring a particular topic), or other occurrence during a media asset that may be of particular interest to a user. For example, in some embodiments an event may be a statement or gesture made by a character or person in a media asset affirming or denying a claim. As referred to herein, the terms “media asset” and “content” should be understood to mean an electronically consumable user asset, such as television programming, as well as pay-per-view programs, on-demand programs (as in video-on-demand (VOD) systems), Internet content (e.g., streaming content, downloadable content, Webcasts, etc.), video clips, audio, content information, pictures, rotating images, documents, playlists, websites, articles, books, electronic books, blogs, advertisements, chat sessions, social media, applications, games, and/or any other media or multimedia and/or combination of the same. Media applications also allow users to navigate among and locate content. 
As referred to herein, the term “multimedia” should be understood to mean content that utilizes at least two different content forms described above, for example, text, audio, images, video, or interactivity content forms. Content may be recorded, played, displayed or accessed by user equipment devices, but can also be part of a live performance. As referred to herein, the phrase “user equipment device,” “user equipment,” “user device,” “electronic device,” “electronic equipment,” “media equipment device,” or “media device” should be understood to mean any device for accessing the content described above, such as a television, a Smart TV, a set-top box, an integrated receiver decoder (IRD) for handling satellite television, a digital storage device, a digital media receiver (DMR), a digital media adapter (DMA), a streaming media device, a DVD player, a DVD recorder, a connected DVD, a local media server, a BLU-RAY player, a BLU-RAY recorder, a personal computer (PC), a laptop computer, a tablet computer, a WebTV box, a personal computer television (PC/TV), a PC media server, a PC media center, a hand-held computer, a stationary telephone, a personal digital assistant (PDA), a mobile telephone, a portable video player, a portable music player, a portable gaming machine, a smart phone, or any other television equipment, computing equipment, or wireless device, and/or combination of the same. In some embodiments, the user equipment device may have a front facing screen and a rear facing screen, multiple front screens, or multiple angled screens. In some embodiments, the user equipment device may have a front facing camera and/or a rear facing camera. On these user equipment devices, users may be able to navigate among and locate the same content available through a television. Consequently, media may be available on these devices, as well. The media provided may be for content available only through a television, for content available only through one or more of other types of user equipment devices, or for content available both through a television and one or more of the other types of user equipment devices. The media applications may be provided as on-line applications (i.e., provided on a website), or as stand-alone applications or clients on user equipment devices. Various devices and platforms that may implement media applications are described in more detail below. In some embodiments, a media application may transmit, to a plurality of users, a request for additional information regarding a context of an event shown in a media asset. As used herein, a “plurality of users” may include, but is not limited to any device, entity, or source of information that may process a request for additional information. For example, the plurality of users may include a person operating a user equipment device. In some embodiments, the person may receive (e.g., via e-mail, Internet posting, advertisement, or any other applicable information delivery method) the request from the media application for additional information, and in response generate a message (e.g., via a return e-mail, an answer to the Internet posting, a user input in the advertisement, or any other applicable method of transmitting information) that includes the additional information. 
It should be noted that in some embodiments, transmitting a request to a plurality of users may also include querying one or more databases (e.g., an Internet search engine or any other storage device, including, but not limited to, databases containing previously generated supplemental information and/or additional information) or consulting one or more data gathering services (e.g., a intelligent personal assistant application) for the additional information. In some embodiments, a media application may use a content-recognition module or algorithm to determine the context of an event and distribute itemized tasks to multiple users in order to generate the supplemental information about the event. The content-recognition module may use object recognition techniques such as edge detection, pattern recognition, including, but not limited to, self-learning systems (e.g., neural networks), optical character recognition, on-line character recognition (including but not limited to, dynamic character recognition, real-time character recognition, intelligent character recognition), and/or any other suitable technique or method to determine the objects and/or characteristics in media assets. For example, the media application may receive media assets in the form of a video. The video may include a series of frames. For each frame of the video, the media application may use a content-recognition module or algorithm to determine the context (e.g., the person that is speaking or a facial gesture affirming or denying a statement) of an event occurring during the frame or series of frames. In some embodiments, the content-recognition module or algorithm may also include speech recognition techniques, including but not limited to Hidden Markov Models, dynamic time warping, and/or neural networks (as described above) to translate spoken words into text. The content-recognition module may also use other techniques for processing audio and/or visual data. For example, the media application may monitor the volume of a statement in a media asset to determine the tone of the statement (e.g., a high volume may indicate an angry tone). In addition, the media application may use multiple types of optical character recognition and/or fuzzy logic, for example, when determining the context of a keyword(s) retrieved from data (e.g., media data, translated audio data, subtitle data, user-generated data, etc.) associated with the media asset (or when cross-referencing various types of data with databases indicating the different contexts of events as described below). For example, the particular data field may be a textual data field. Using fuzzy logic, the system may determine two fields and/or values to be identical even though the substance of the data field or value (e.g., two different spellings) is not identical. In some embodiments, the system may analyze particular data fields of a data structure or media asset frame for particular values or text. The data fields could be associated with characteristics, additional information, and/or any other data required for the function of the embodiments described herein. Furthermore, the data fields could contain values (e.g., the data fields could be expressed in binary or any other suitable code or programming language). As used herein, the “context” of an event refers to the set of circumstances or facts that surround a particular event that influence or affect the meaning of the event. 
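As a purely illustrative sketch of the fuzzy matching idea above (not the patent's implementation), the snippet below treats two textual data-field values as matching despite spelling differences by using Python's standard-library difflib similarity ratio; the 0.8 threshold and the example strings are arbitrary assumptions.

from difflib import SequenceMatcher

def fields_match(value_a, value_b, threshold=0.8):
    """Return True if two textual field values are similar enough to be treated as identical."""
    ratio = SequenceMatcher(None, value_a.lower(), value_b.lower()).ratio()
    return ratio >= threshold

print(fields_match("colour exports", "color exports"))  # True: different spellings, same field
print(fields_match("coal exports", "gold imports"))     # False: genuinely different values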
For example, when determining the context of a written and/or spoken statement, the media application may determine who or what authored/stated the statement, the written and/or spoken words and/or other statements that preceded and/or followed the statement, the tone of the statement, and/or any other conditions that may alter the connotation of the statement. FIG. 1 shows an illustrative example of a media application that may be used to display supplemental information in accordance with some embodiments of the disclosure. Display 100 illustrates a display on a user device displaying a media asset. Display 108 illustrates a display featuring supplemental information as described and/or generated in FIGS. 6-9. It should be noted that display 100 and display 108 may be presented on any of the devices shown in FIGS. 3-4. For example, in some embodiments, display 100 and display 108 may be displayed on user equipment 402, 404, and/or 406 (FIG. 4). In FIG. 1, display 100 represents a display of a media asset (e.g., a streaming television program) on a user device (e.g., user equipment 402, 404, and/or 406 (FIG. 4)). Display 100 includes entity 102 and entity 104. In display 100, entity 104 is currently speaking as indicated by event 106. As shown in FIG. 1, event 106 is a statement (e.g., “We export a lot of coal”) by a person in the media asset. In some embodiments, display 108 represents the continued display of the media asset on a user device, after a user has requested supplemental information about event 106. For example, a media application may have received a user input (e.g., via user input interface 310 (FIG. 3)) while entity 104 was speaking. Using the systems and methods described herein (e.g., FIGS. 6-9), the media application generated supplemental information 110. Supplemental information 110 represents more information about event 106. For example, the media application (e.g., media application 206 (FIG. 2)) may have determined the context of event 106. Specifically, the media application may determine via a content-recognition module or algorithm the words spoken and/or actions by the person during the event. Additionally or alternatively, the media application may analyze the words and/or action during a predetermined amount of time (e.g., ten seconds) before and/or after the event (e.g., in order to better understand the context of the event). Furthermore, by cross-referencing the words and/or other information obtained by the content-recognition module (e.g., as discussed below in relation to FIG. 5) with a database, the content-recognition module determines that the term “we,” the person in the media asset refers to an organization or body. The content-recognition module or algorithm may also determine that the term “export” refers to shipping goods out of a country. The content-recognition module or algorithm may also determine that the term “a lot” refers to a particular numerical amount. Finally, the content-recognition module or algorithm may also determine that the term “coal” refers to a mineral of fossilized carbon. The content-recognition module or algorithm may also determine the relationships between words and/or other information obtained by the content-recognition module. For example, by processing the relationship between the words, the media application determines that event 106 is a statement regarding a particular amount of a particular substance shipped out of a particular country. 
Therefore, the media application determines that the request for supplemental information is likely a request to determine the validity of the statement. The media application then generates the supplemental information. The media application may also have stored supplemental information generated by previous requests (e.g., supplemental information generated in response to the same or different user viewing the media asset at an earlier date), and display the supplemental information again during the event (either in response to a user input requesting supplemental information or automatically without a user requesting supplemental information). FIG. 2 shows an illustrative example of a system that may be used to generate supplemental information (e.g., supplemental information 110 (FIG. 1)) based on additional information provided by a plurality of users in accordance with some embodiments of the disclosure. For example, in some embodiments, system 200 may be used to generate supplemental information (e.g., supplemental information 110 (FIG. 1)) on a display (e.g., display 108 (FIG. 1)) of a user device (e.g., user equipment 402, 404, and/or 406 (FIG. 4)). It should be noted that in some embodiments, the devices shown in FIG. 2 may correspond to one or more devices in FIGS. 3-4. FIG. 2 shows system 200. In system 200, a user is currently accessing a media asset on display 202. In some embodiments, display 202 may correspond to display 100 (FIG. 1)). During an event (e.g., event 106 (FIG. 1)) a user may have requested supplemental information about an event (e.g., event 106 (FIG. 1)) in display 202 using user device 204. Media application 206, which in some embodiments, may be implemented on user device 204 or at a remote location (e.g., supplemental information source 418 (FIG. 4)), receives the request for supplemental information. Media application 206 determines the context of the event (e.g., who said the statement making up the event and to what the statement was referring). After determining the context of the statement, the media application may itemize into one or more tasks, additional information (e.g., facts) it requires in order to generate the supplemental information (e.g., a verification or correction of the factual basis of the statement). For example, if the event is a statement about the amount of coal that is exported from the United States (e.g., as described in relation to FIG. 1 above), media application 206 may determine the fact required to generate the supplemental information is the exact numerical amount of coal that is exported from the United States. The media application may then transmit requests for the additional information (e.g., a request for the exact numerical amount of coal that is exported from the United States) to a plurality of other users. In FIG. 2, users operating user device 208, user device 210, and user device 212 represent a plurality of users. Having determined the additional information it requires in order to generate the supplemental information, media application 206 requests the additional information from the plurality of users. In system 200, media application 206 has transmitted the same task (e.g., the same question) to each of the plurality of users. In some embodiments, one or more of the users may receive different tasks. 
For example, by breaking the additional information into small, independent tasks, media application 206 may increase the speed (e.g., multiple users may work concurrently to solve different parts of a problem) and accuracy (e.g., reducing the tasks to smaller, less complex problems reduces the chance of human error) of the additional information returned by the plurality of users. In addition, by breaking the additional information into small, independent tasks, the plurality of users may not know to what they are contributing (enhancing the privacy of the user that requested the supplemental information), however, the plurality of users can still be effective in their individual tasks. In addition, by breaking the additional information into small, independent tasks, the media application may more easily outsource the requests for additional information. For example, one or more of the tasks used to generate the additional information may be the same as one or more of the tasks used to generate other additional information (e.g., additional information used to generate different supplemental information in response to a request for supplemental information about the same or a different event issued by the same or a different user). The response to each of the request and/or the additional information may be stored (e.g., on any of the devices accessible by communications network 414 (FIG. 4)) for subsequent retrieval. Based on the responses, transmitted as messages including the additional information, from the plurality of other users, media application 206 may generate the supplemental information (e.g., supplemental information 110 (FIG. 1)) for display to the user on the user device 204. For example, media application may aggregate, append, and/or compare the additional information in each of the messages received from the plurality of users. The supplemental information may then be generated based on the aggregated, appended, and/or compared additional information (e.g., as described in FIG. 9 below). In some embodiments, the plurality of users may receive summary information about the event with the request for additional information. (e.g., a video clip of a portion or segment of the media asset, a textual description, etc.), which may help the plurality of users provide additional information. For example, in some embodiments, the media application may instead of (or in addition to) determining the context of an event, determine a particular portion of the event that would be needed for the plurality of users to provide additional information about the event. For example, the media application may use progress information associated with the progress of the media asset (e.g., line 506 (FIG. 5)) to determine at what point during the progression of the media asset the event occurred, and in response, transmit a portion of the media asset beginning ten second before that point and ending ten seconds after that point. For example, if the event is a statement made by a character or person in a media asset, the media application may determine when the statement began (e.g., the point of progress of the media asset in which the statement began) and ended. The media application may then include the portion containing the entire statement (and the event) in the request for additional information sent to the plurality of users. 
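One way to picture the aggregation step described here is a simple majority vote over the users' answers to a single itemized task. The sketch below is purely illustrative and is not the patent's implementation; the placeholder replies are invented for the example.

from collections import Counter

def aggregate_messages(messages):
    """Return the most common answer among the additional-information messages."""
    counts = Counter(answer.strip().lower() for answer in messages)
    answer, _ = counts.most_common(1)[0]
    return answer

replies = ["answer A", "Answer A", "answer B"]  # placeholder responses from three users
print(aggregate_messages(replies))              # -> "answer a"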
The selected portion may include any amount of summary information that the media application determines is necessary for the user or any one of the plurality of users to understand the main action sequence. This summary information (e.g., a portion of the media asset) may be included with the request for additional information (e.g., in a file transmitted with the request), or may be included with the generated supplemental information as a reference for the user. For example, the media application may select a segment of the play length of the media asset or a particular scene of the media asset, which includes the event, for to display to the plurality of users along with the request for additional information. For example, if an event (e.g., a statement) was in response to a question, the media application may also determine when the question began and ended, and send the entire question (or the play length of the media asset corresponding to the question) to the plurality of users as well. After determining the portion to provide to the plurality of users (e.g., a segment including the ten seconds before and the ten seconds after the event), the media application may provide the summary information of the event and any other material needed by the plurality of users to understand the event and/or request for supplemental information from the user. In some embodiments, a portion of the media asset containing the event, as selected by the media application, may also include any amount of the play length of the media asset, or any amount of scenes or segments from the media asset. In some embodiments, the portion may include segments of the play length of the media asset or scenes from the media asset that are not adjacent during the normal playback of the media asset. For example, in some embodiments, a portion of the media asset may include one or more sequences or scenes of interest to the plurality of users, even though the particular sequences or scenes are featured at different points in the play length of the media asset. The media application may determine the segments or scenes to include based on a content recognition file (e.g., data structure 500 (FIG. 5)) describing the media asset. For example, if a plot point or other information, which may be relevant to an event is displayed earlier in the media asset, the summary information may include a portion of the media asset displaying the plot point. In some embodiments, the length of a portion may be determined based on the genre of the media asset. In some embodiments, the length of the portion may depend on a user profile for the user or for anyone of the plurality of users. For example, a user profile and/or a content recognition file (e.g., data structure 500 (FIG. 5)) may indicate that a particular user may require more or less additional content. For example, the user may be aware of particular characters or plot points in the media asset and, therefore, may not require the additional content to introduce those aspects. In some embodiments, the plurality of users may receive a particular user interface, which organizes the data about the event (e.g., a clip of the actual event, summary information about the event, information about the request for supplemental information issued by the user, etc.). The interface may also include an automatic submission form, which may be used to generate a message, which is sent to the media application. 
In some embodiments, the media application may also receive user input from the user requesting the supplemental information that further affects the generation of supplemental information by the media application. For example, the user may request the supplemental information includes particular information (e.g., the factual basis of a statement), may request a multimedia format of the supplemental information (e.g., textual description, a video clip, etc.), may request a form of the supplemental information (e.g., a short description about the event, an Internet link to other sources of information on the event, or a true or false designation about the event) by entering user inputs (e.g., via user input interface 310 (FIG. 3)). It should be noted that any information or process referred to in this disclosure that is referred to as being in response to a user input may alternatively and/or additionally be performed automatically by the media application (e.g., via control circuitry 304 (FIG. 3)). For example, in some embodiments, a user may request a true or false designation (e.g., an on-screen pop-up box indicating whether an event was true or false). Additionally and/or alternatively, in some embodiments, the true or false designation may appear automatically based on predetermined settings indicating to the media application to display a true or false designation in response to detecting an event. In some embodiments, an indicator that supplemental information has previously been generated or is currently ready to generate (e.g., a plurality of users are available) may be displayed to a user (e.g., on display 100 (FIG. 1) during the event). The indicator may also indicate the particular information, the multimedia format, and/or the form of supplemental information that is available. An indicator may also appear with the supplemental information (e.g., supplemental information 110 (FIG. 1)), which allows the user to request additional supplemental information or provide feedback/responses (e.g., rating the quality of the supplemental information) to the media application and/or plurality of users. In some embodiments, a user may also access (e.g., via selection of an indicator and/or automatically upon the supplemental information being generated) summary information about the event. For example, in some embodiments (e.g., when the supplemental information is not generated in real-time), the media asset may have progressed to a different point by the time the supplemental information is ready for display. Therefore, the media application may need to provide a video clip of the event or other summary information, so that the user remembers about what or why the supplemental information was requested. FIG. 3 is a block diagram of an illustrative user equipment device in accordance with some embodiments of the disclosure. It should be noted that the components shown in FIG. 3 may be used to store, receive, transmit, and/or display the media assets, additional information, and/or supplemental information as described herein. For example, media application 206 (FIG. 2) may be implemented on user equipment device 300, and may issue instructions (e.g., displaying supplemental information 110 (FIG. 1)) via control circuitry 304. Users may access media assets and the media application (and its display screens described above and below) from one or more of their user equipment devices. FIG. 3 shows a generalized embodiment of illustrative user equipment device 300. 
More specific implementations of user equipment devices are discussed below in connection with FIG. 4. User equipment device 300 may receive content and data via input/output (hereinafter “I/O”) path 302. I/O path 302 may provide content (e.g., broadcast programming, on-demand programming, Internet content, content available over a local area network (LAN) or wide area network (WAN), and/or other content) and data to control circuitry 304, which includes processing circuitry 306 and storage 308. Control circuitry 304 may be used to send and receive commands, requests, and other suitable data using I/O path 302. I/O path 302 may connect control circuitry 304 (and specifically processing circuitry 306) to one or more communications paths (described below). I/O functions may be provided by one or more of these communications paths, but are shown as a single path in FIG. 3 to avoid overcomplicating the drawing. Control circuitry 304 may be based on any suitable processing circuitry such as processing circuitry 306. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores) or supercomputer. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How does a media application determine the context of an event? Only give me the answer and do not output any other words.
Answer:
[ "It uses a content-recognition module or algorithm." ]
276
6,954
single_doc_qa
fdf1d8e80a4d469a9f0b0ab76769e538
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Hugh Hilton Goodwin (December 21, 1900 – February 25, 1980) was a decorated officer in the United States Navy with the rank of Vice Admiral. A veteran of both World Wars, he commanded escort carrier during the Mariana Islands campaign. Goodwin then served consecutively as Chief of Staff, Carrier Strike Group 6 and as Air Officer, Philippine Sea Frontier and participated in the Philippines campaign in the later part of the War. Following the War, he remained in the Navy and rose to the flag rank and held several important commands including Vice Commander, Military Air Transport Service, Commander, Carrier Division Two and Commander, Naval Air Forces, Continental Air Defense Command. Early life and career Hugh H. Goodwin was born on December 21, 1900, in Monroe, Louisiana and attended Monroe High School there (now Neville High School). Following the United States' entry into World War I in April 1917, Goodwin left the school without receiving the diploma in order to see some combat and enlisted the United States Navy on May 7, 1917. He completed basic training and was assigned to the battleship . Goodwin participated in the training of armed guard crews and engine room personnel as the Atlantic Fleet prepared to go to war and in November 1917, he sailed with the rest of Battleship Division 9, bound for Britain to reinforce the Grand Fleet in the North Sea. Although he did not complete the last year of high school, Goodwin was able to earn an appointment to the United States Naval Academy at Annapolis, Maryland in June 1918. While at the academy, he earned a nickname "Huge" and among his classmates were several future admirals and generals including: Hyman G. Rickover, Milton E. Miles, Robert E. Blick Jr., Herbert S. Duckworth, Clayton C. Jerome, James P. Riseley, James A. Stuart, Frank Peak Akers, Sherman Clark, Raymond P. Coffman, Delbert S. Cornwell, Frederick J. Eckhoff, Ralph B. DeWitt, John Higgins, Vernon Huber, Albert K. Morehouse, Harold F. Pullen, Michael J. Malanaphy, William S. Parsons, Harold R. Stevens, John P. Whitney, Lyman G. Miller and George J. O'Shea. Goodwin graduated with Bachelor of Science degree on June 3, 1922, and was commissioned Ensign in the United States Navy. He was subsequently assigned to the battleship and took part in the voyage to Rio de Janeiro, Brazil, before he was ordered to the Naval Torpedo Station at Newport, Rhode Island for submarine instruction in June 1923. Goodwin completed the training several weeks later and was attached to the submarine . He then continued his further training aboard submarine and following his promotion to Lieutenant (junior grade) on June 3, 1925, he qualified as submariner. He then served aboard submarine off the coast of California, before he was ordered for the recruiting duty to San Francisco in September 1927. While in this capacity, Goodwin applied for naval aviation training which was ultimately approved and he was ordered to the Naval Air Station Pensacola, Florida in August 1928. Toward the end of the training, he was promoted to lieutenant on December 11, 1928, and upon the completion of the training in January 1929, he was designated Naval aviator. Goodwin was subsequently attached to the Observation Squadron aboard the aircraft carrier and participated in the Fleet exercises in the Caribbean. 
He was transferred to the Bureau of Aeronautics in Washington, D.C. in August 1931 and served consecutively under the architect of naval aviation William A. Moffett and future Chief of Naval Operations Ernest J. King. In June 1933, Goodwin was ordered to the Naval War College at Newport, Rhode Island, where he completed junior course in May of the following year. He subsequently joined the crew of aircraft carrier and served under Captain Arthur B. Cook and took part in the Fleet exercises in the Caribbean and off the East Coast of the United States. He was ordered back to the Naval Air Station Pensacola, Florida in June 1936 and was attached to the staff of the Base Commandant, then-Captain Charles A. Blakely. When Blakely was succeeded by William F. Halsey in June 1937, Goodwin remained in Halsey's staff and was promoted to Lieutenant Commander on December 1, 1937. He also completed correspondence course in International law at the Naval War College. Goodwin was appointed Commanding officer of the Observation Squadron 1 in June 1938 and attached to the battleship he took part in the patrolling of the Pacific and West Coast of the United States until September 1938, when he assumed command of the Observation Squadron 2 attached to the battleship . When his old superior from Lexington, now Rear Admiral Arthur B. Cook, was appointed Commander Aircraft, Scouting Force in June 1939, he requested Goodwin as his Aide and Flag Secretary. He became Admiral Cook's protégé and after year and half of service in the Pacific, he continued as his Aide and Flag Secretary, when Cook was appointed Commander Aircraft, Atlantic Fleet in November 1940. World War II Following the United States' entry into World War II, Goodwin was promoted to the temporary rank of Commander on January 1, 1942, and assumed duty as advisor to the Argentine Navy. His promotion was made permanent two months later and he returned to the United States in early 1943 for duty as assistant director of Planning in the Bureau of Aeronautics under Rear admiral John S. McCain. While still in Argentina, Goodwin was promoted to the temporary rank of Captain on June 21, 1942. By the end of December 1943, Goodwin was ordered to Astoria, Oregon, where he assumed command of newly commissioned escort carrier USS Gambier Bay. He was responsible for the initial training of the crew and was known as a strict disciplinarian, but the crew appreciated the skills he taught them that prepared them for combat. Goodwin insisted that everyone aboard has to do every job right every time and made us fight our ship at her best. During the first half of 1944, Gambier Bay was tasked with ferrying aircraft for repairs and qualified carrier pilots from San Diego to Pearl Harbor, Hawaii, before departed on May 1, 1944, to join Rear admiral Harold B. Sallada's Carrier Support Group 2, staging in the Marshalls for the invasion of the Marianas. The air unit, VC-10 Squadron, under Goodwin's command gave close air support to the initial landings of Marines on Saipan on June 15, 1944, destroying enemy gun emplacements, troops, tanks, and trucks. On the 17th, her combat air patrol (CAP) shot down or turned back all but a handful of 47 enemy planes headed for her task group and her gunners shot down two of the three planes that did break through to attack her. 
Goodwin's carrier continued providing close ground support operations at Tinian through the end of July 1944, then turned her attention to Guam, where she gave identical aid to invading troops until mid-August that year. For his service during the Mariana Islands campaign, Goodwin was decorated with the Bronze Star Medal with Combat "V". He was succeeded by Captain Walter V. R. Vieweg on August 18, 1944, and appointed Chief of Staff, Carrier Division Six under Rear admiral Arthur W. Radford. The Gambier Bay was sunk in the Battle off Samar on October 25, 1944, during the Battle of Leyte Gulf, after helping turn back a much larger attacking Japanese surface force. Goodwin served with Carrier Division Six during the Bonin Islands raids and the naval operations at Palau, and took part in the Battle of Leyte Gulf and operations supporting the Leyte landings in late 1944. He was later appointed Air Officer of the Philippine Sea Frontier under Rear admiral James L. Kauffman and remained with that command until the end of hostilities. For his service in the later part of World War II, Goodwin was decorated with the Legion of Merit with Combat "V". He was also entitled to wear two Navy Presidential Unit Citations and the Navy Unit Commendation. Postwar service Following the surrender of Japan, Goodwin assumed command of the light aircraft carrier San Jacinto on August 24, 1945. The ship's air missions over Japan became mercy flights over Allied prisoner-of-war camps, dropping food and medicine until the men could be rescued. She was also present at Tokyo Bay for the Japanese surrender on September 2, 1945. Goodwin returned with San Jacinto to the United States in mid-September 1945 and was detached in January 1946. He subsequently served in the office of the Chief of Naval Operations until May that year, when he entered instruction at the National War College. Goodwin graduated in June 1947 and served on the Secretary's committee for Research on Reorganization. Upon promotion to Rear admiral on April 1, 1949, Goodwin was appointed Chief of Staff and Aide to Commander-in-Chief, Atlantic Fleet under Admiral William H. P. Blandy. Revolt of the Admirals In April 1949, the budget cuts and proposed reorganization of the United States Armed Forces by Secretary of Defense Louis A. Johnson launched a wave of discontent among senior commanders in the United States Navy. Johnson proposed merging the Marine Corps into the Army and reducing the Navy to a convoy-escort force. Goodwin's superior officer, Admiral Blandy, was called to testify before the House Committee on Armed Services, and his harsh statements in defense of the Navy cost him his career. Goodwin shared his views and openly criticized Secretary Johnson for having power concentrated in a single civilian executive, who was an appointee of the Government and not an elected representative of the people. He also criticized aspects of defense unification which permitted the Joint Chiefs of Staff to vote on arms policies of individual services, and thus "rob" the branches of autonomy. The outbreak of the Korean War in the summer of 1950 proved Secretary Johnson's proposal incorrect, and he resigned in September that year. Secretary of the Navy Francis P. Matthews had resigned one month earlier. Later service Due to the Revolt of the Admirals, Blandy was forced to retire in February 1950 and Goodwin was ordered to Newport, Rhode Island, for temporary duty as Chief of Staff and Aide to the President of the Naval War College under Vice admiral Donald B.
Beary in April 1950. Goodwin was detached from that assignment two months later and appointed a member of the General Board of the Navy. He was shortly thereafter appointed acting Navy Chief of Public Information, substituting for Rear Admiral Russell S. Berkey, who had been relieved due to illness, but returned to the General Board of the Navy in July that year. Goodwin served in that capacity until February 1951, when he relieved his Academy classmate, Rear admiral John P. Whitney, as Vice Commander, Military Air Transport Service (MATS). While in this capacity, Goodwin served under Lieutenant general Laurence S. Kuter and was co-responsible for the logistical support of United Nations troops fighting in Korea. The MATS operated from the United States to Japan, and Goodwin served in this capacity until August 1953, when he was appointed Commander Carrier Division Two. While in this assignment, he took part in Operation Mariner, a joint Anglo-American exercise which encountered very heavy seas over a two-week period in the fall of 1953. Goodwin was ordered to the Philippines in May 1954 and assumed duty as Commander, U.S. Naval Forces in the Philippines, with headquarters at Naval Station Sangley Point near Cavite. He held that command during a period of tensions between Taiwan and China and publicly declared, shortly after his arrival, that any attack on Taiwan by the Chinese Communists on the mainland would result in US participation in the conflict. The naval fighter planes under his command also provided escort for passing commercial planes. Goodwin worked together with retired Admiral Raymond A. Spruance, then-Ambassador to the Philippines, and accompanied him during visits to Singapore, Bangkok and Saigon in January 1955. On December 18, 1955, Goodwin's classmate Rear admiral Albert K. Morehouse, then serving as Commander, Naval Air Forces, Continental Air Defense Command (CONAD), died of a heart attack, and Goodwin was ordered to CONAD headquarters in Colorado Springs, Colorado, to assume Morehouse's position. While in this capacity, he was subordinated to Army General Earle E. Partridge and was responsible for the Naval and Marine Forces allocated to the command designated for the defense of the Continental United States. Retirement Goodwin retired on June 1, 1957, after 40 years of active service, and was advanced to the rank of Vice admiral on the retired list for having been specially commended in combat. A week later, he was invited back to his Monroe High School (now Neville High School) and handed a diploma showing that he had graduated with the class of 1918. He then settled in Monterey, California, where he taught American history at the Stevenson School and was a member of the Naval Order of the United States. Vice admiral Hugh H. Goodwin died at his home on February 25, 1980, aged 79. He was survived by his wife, Eleanor, with whom he had two children: a daughter, Sidney, and a son, Hugh Jr., who graduated from the Naval Academy in June 1948 but died one year later, when the Hellcat fighter he was piloting collided with another over the Gulf of Mexico during training. Decorations The ribbon bar of Vice admiral Hugh H.
Goodwin is not reproduced in this passage. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: When did Goodwin become a Naval aviator? Only give me the answer and do not output any other words.
Answer:
[ "Goodwin became a Naval aviator in January 1929." ]
276
2,991
single_doc_qa
e5232693446548c4be3b6440807864ec
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Nowadays deep learning techniques outperform the other conventional methods in most of the speech-related tasks. Training robust deep neural networks for each task depends on the availability of powerful processing GPUs, as well as standard and large scale datasets. In text-independent speaker verification, large-scale datasets are available, thanks to the NIST SRE evaluations and other data collection projects such as VoxCeleb BIBREF0. In text-dependent speaker recognition, experiments with end-to-end architectures conducted on large proprietary databases have demonstrated their superiority over traditional approaches BIBREF1. Yet, contrary to text-independent speaker recognition, text-dependent speaker recognition lacks large-scale publicly available databases. The two most well-known datasets are probably RSR2015 BIBREF2 and RedDots BIBREF3. The former contains speech data collected from 300 individuals in a controlled manner, while the latter is used primarily for evaluation rather than training, due to its small number of speakers (only 64). Motivated by this lack of large-scale dataset for text-dependent speaker verification, we chose to proceed with the collection of the DeepMine dataset, which we expect to become a standard benchmark for the task. Apart from speaker recognition, large amounts of training data are required also for training automatic speech recognition (ASR) systems. Such datasets should not only be large in size, they should also be characterized by high variability with respect to speakers, age and dialects. While several datasets with these properties are available for languages like English, Mandarin, French, this is not the case for several other languages, such as Persian. To this end, we proceeded with collecting a large-scale dataset, suitable for building robust ASR models in Persian. The main goal of the DeepMine project was to collect speech from at least a few thousand speakers, enabling research and development of deep learning methods. The project started at the beginning of 2017, and after designing the database and the developing Android and server applications, the data collection began in the middle of 2017. The project finished at the end of 2018 and the cleaned-up and final version of the database was released at the beginning of 2019. In BIBREF4, the running project and its data collection scenarios were described, alongside with some preliminary results and statistics. In this paper, we announce the final and cleaned-up version of the database, describe its different parts and provide various evaluation setups for each part. Finally, since the database was designed mainly for text-dependent speaker verification purposes, some baseline results are reported for this task on the official evaluation setups. Additional baseline results are also reported for Persian speech recognition. However, due to the space limitation in this paper, the baseline results are not reported for all the database parts and conditions. They will be defined and reported in the database technical documentation and in a future journal paper. Data Collection DeepMine is publicly available for everybody with a variety of licenses for different users. It was collected using crowdsourcing BIBREF4. The data collection was done using an Android application. 
Each respondent installed the application on his/her personal device and recorded several phrases in different sessions. The Android application did various checks on each utterance and if it passed all of them, the respondent was directed to the next phrase. For more information about data collection scenario, please refer to BIBREF4. Data Collection ::: Post-Processing In order to clean-up the database, the main post-processing step was to filter out problematic utterances. Possible problems include speaker word insertions (e.g. repeating some part of a phrase), deletions, substitutions, and involuntary disfluencies. To detect these, we implemented an alignment stage, similar to the second alignment stage in the LibriSpeech project BIBREF5. In this method, a custom decoding graph was generated for each phrase. The decoding graph allows for word skipping and word insertion in the phrase. For text-dependent and text-prompted parts of the database, such errors are not allowed. Hence, any utterances with errors were removed from the enrollment and test lists. For the speech recognition part, a sub-part of the utterance which is correctly aligned to the corresponding transcription is kept. After the cleaning step, around 190 thousand utterances with full transcription and 10 thousand with sub-part alignment have remained in the database. Data Collection ::: Statistics After processing the database and removing problematic respondents and utterances, 1969 respondents remained in the database, with 1149 of them being male and 820 female. 297 of the respondents could not read English and have therefore read only the Persian prompts. About 13200 sessions were recorded by females and similarly, about 9500 sessions by males, i.e. women are over-represented in terms of sessions, even though their number is 17% smaller than that of males. Other useful statistics related to the database are shown in Table TABREF4. The last status of the database, as well as other related and useful information about its availability can be found on its website, together with a limited number of samples. DeepMine Database Parts The DeepMine database consists of three parts. The first one contains fixed common phrases to perform text-dependent speaker verification. The second part consists of random sequences of words useful for text-prompted speaker verification, and the last part includes phrases with word- and phoneme-level transcription, useful for text-independent speaker verification using a random phrase (similar to Part4 of RedDots). This part can also serve for Persian ASR training. Each part is described in more details below. Table TABREF11 shows the number of unique phrases in each part of the database. For the English text-dependent part, the following phrases were selected from part1 of the RedDots database, hence the RedDots can be used as an additional training set for this part: “My voice is my password.” “OK Google.” “Artificial intelligence is for real.” “Actions speak louder than words.” “There is no such thing as a free lunch.” DeepMine Database Parts ::: Part1 - Text-dependent (TD) This part contains a set of fixed phrases which are used to verify speakers in text-dependent mode. Each speaker utters 5 Persian phrases, and if the speaker can read English, 5 phrases selected from Part1 of the RedDots database are also recorded. We have created three experimental setups with different numbers of speakers in the evaluation set. 
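Before the evaluation setups are described in detail, here is a minimal sketch of the utterance filtering idea behind the post-processing step above. The actual pipeline uses an alignment stage with custom decoding graphs (similar to LibriSpeech's second alignment stage); this sketch only illustrates the downstream decision, assuming a hypothetical aligned word hypothesis per utterance: utterances that do not match the prompt exactly are excluded from the text-dependent and text-prompted lists, while the longest correctly aligned sub-part is kept for the speech recognition part.

```python
import difflib

def validate_utterance(prompt_words, aligned_words):
    """Hypothetical filtering step: 'exact' keeps the utterance for the
    text-dependent/text-prompted parts; otherwise only the longest correctly
    aligned sub-part of the prompt is kept (usable for the ASR part)."""
    if aligned_words == prompt_words:
        return "exact", prompt_words
    m = difflib.SequenceMatcher(a=prompt_words, b=aligned_words)
    match = m.find_longest_match(0, len(prompt_words), 0, len(aligned_words))
    return "partial", prompt_words[match.a:match.a + match.size]

# Example: a deleted word makes the utterance unusable for Part1/Part2,
# but the correctly aligned sub-part remains usable for speech recognition.
print(validate_utterance(["my", "voice", "is", "my", "password"],
                         ["my", "voice", "is", "password"]))
```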
For each of the three setups, speakers with more recording sessions are included in the evaluation set and the rest of the speakers are used for training in the background set (in the database, all background sets are basically training data). The rows in Table TABREF13 correspond to the different experimental setups and show the numbers of speakers in each set. Note that, for English, we have filtered the (Persian native) speakers by the ability to read English. Therefore, there are fewer speakers in each set for English than for Persian. There is a small “dev” set in each setup which can be used for parameter tuning to prevent over-tuning on the evaluation set. For each experimental setup, we have defined several official trial lists with different numbers of enrollment utterances per trial in order to investigate the effects of having different amounts of enrollment data. All trials in one trial list have the same number of enrollment utterances (3 to 6) and only one test utterance. All enrollment utterances in a trial are taken from different consecutive sessions and the test utterance is taken from yet another session. From all the setups and conditions, the 100-spk setup with 3-session enrollment (3-sess) is considered the main evaluation condition. In Table TABREF14, the numbers of trials for Persian 3-sess are shown for the different types of trials in text-dependent speaker verification (SV). Note that for Imposter-Wrong (IW) trials (i.e. an imposter speaker pronouncing a wrong phrase), we merely create one wrong trial for each Imposter-Correct (IC) trial to limit the huge number of possible trials for this case. So, the numbers of trials for the IC and IW cases are the same. DeepMine Database Parts ::: Part2 - Text-prompted (TP) For this part, in each session, 3 random sequences of Persian month names are shown to the respondent in two modes: In the first mode, the sequence consists of all 12 months, which will be used for speaker enrollment. The second mode contains a sequence of 3 month names that will be used as a test utterance. In each set of 8 sessions received by a respondent from the server, there are 3 enrollment phrases of all 12 months (all in just one session), and $7 \times 3$ other test phrases, containing fewer words. For a respondent who can read English, 3 random sequences of English digits are also recorded in each session. In one of the sessions, these sequences contain all digits and the remaining ones contain only 4 digits. Similar to the text-dependent case, three experimental setups with different numbers of speakers in the evaluation set are defined (corresponding to the rows in Table TABREF16). However, a different strategy is used for defining trials: Depending on the enrollment condition (1- to 3-sess), trials are enrolled on utterances of all words from 1 to 3 different sessions (i.e. 3 to 9 utterances). Further, we consider two conditions for test utterances: seq test utterances with only 3 or 4 words and full test utterances with all words (i.e. the same words as in enrollment but in a different order). From all setups and conditions, the 100-spk setup with 1-session enrolment (1-sess) is considered the main evaluation condition for the text-prompted case. In Table TABREF16, the numbers of trials (sum for both seq and full conditions) for Persian 1-sess are shown for the different types of trials in the text-prompted SV. Again, we just create one IW trial for each IC trial.
DeepMine Database Parts ::: Part3 - Text-independent (TI) In this part, 8 Persian phrases that have already been transcribed on the phone level are displayed to the respondent. These phrases are chosen mostly from news and Persian Wikipedia. If the respondent is unable to read English, instead of 5 fixed phrases and 3 random digit strings, 8 other Persian phrases are also prompted to the respondent to have exactly 24 phrases in each recording session. This part can be useful at least for three potential applications. First, it can be used for text-independent speaker verification. The second application of this part (same as Part4 of RedDots) is text-prompted speaker verification using random text (instead of a random sequence of words). Finally, the third application is large vocabulary speech recognition in Persian (explained in the next sub-section). Based on the recording sessions, we created two experimental setups for speaker verification. In the first one, respondents with at least 17 recording sessions are included to the evaluation set, respondents with 16 sessions to the development and the rest of respondents to the background set (can be used as training data). In the second setup, respondents with at least 8 sessions are included to the evaluation set, respondents with 6 or 7 sessions to the development and the rest of respondents to the background set. Table TABREF18 shows numbers of speakers in each set of the database for text-independent SV case. For text-independent SV, we have considered 4 scenarios for enrollment and 4 scenarios for test. The speaker can be enrolled using utterances from 1, 2 or 3 consecutive sessions (1sess to 3sess) or using 8 utterances from 8 different sessions. The test speech can be one utterance (1utt) for short duration scenario or all utterances in one session (1sess) for long duration case. In addition, test speech can be selected from 5 English phrases for cross-language testing (enrollment using Persian utterances and test using English utterances). From all setups, 1sess-1utt and 1sess-1sess for 438-spk set are considered as the main evaluation setups for text-independent case. Table TABREF19 shows number of trials for these setups. For text-prompted SV with random text, the same setup as text-independent case together with corresponding utterance transcriptions can be used. DeepMine Database Parts ::: Part3 - Speech Recognition As explained before, Part3 of the DeepMine database can be used for Persian read speech recognition. There are only a few databases for speech recognition in Persian BIBREF6, BIBREF7. Hence, this part can at least partly address this problem and enable robust speech recognition applications in Persian. Additionally, it can be used for speaker recognition applications, such as training deep neural networks (DNNs) for extracting bottleneck features BIBREF8, or for collecting sufficient statistics using DNNs for i-vector training. We have randomly selected 50 speakers (25 for each gender) from the all speakers in the database which have net speech (without silence parts) between 25 minutes to 50 minutes as test speakers. For each speaker, the utterances in the first 5 sessions are included to (small) test-set and the other utterances of test speakers are considered as a large-test-set. The remaining utterances of the other speakers are included in the training set. The test-set, large-test-set and train-set contain 5.9, 28.5 and 450 hours of speech respectively. 
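The split of Part3 into ASR train and test sets described above can be sketched roughly as follows; the field names (net speech minutes, gender, session index) are hypothetical and the selection details in the released database may differ.

```python
import random

def split_asr_speakers(net_minutes, gender, sessions, per_gender=25, seed=0):
    """Pick 25 test speakers per gender with 25-50 minutes of net speech;
    their first 5 sessions form the small test set, their remaining
    utterances the large test set, and everyone else goes to training."""
    rng = random.Random(seed)
    test_speakers = set()
    for g in ("male", "female"):
        pool = sorted(s for s, m in net_minutes.items()
                      if gender[s] == g and 25.0 <= m <= 50.0)
        test_speakers.update(rng.sample(pool, per_gender))
    test, large_test, train = [], [], []
    for spk, utt_list in sessions.items():   # utt_list: [(session_idx, utt_id), ...]
        for sess, utt in utt_list:
            if spk not in test_speakers:
                train.append(utt)
            elif sess <= 5:
                test.append(utt)
            else:
                large_test.append(utt)
    return test, large_test, train
```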
There are about 8300 utterances in Part3 which contain only Persian full names (i.e. first and family name pairs). Each phrase consists of several full names, and their phoneme transcriptions were extracted automatically using a trained Grapheme-to-Phoneme (G2P) model. These utterances can be used to evaluate the performance of a system for name recognition, which is usually more difficult than normal speech recognition because of the lack of a reliable language model. Experiments and Results Due to space limitations, we present results only for the Persian text-dependent speaker verification and speech recognition. Experiments and Results ::: Speaker Verification Experiments We conducted an experiment on the text-dependent speaker verification part of the database, using the i-vector based method proposed in BIBREF9, BIBREF10, and applied it to the Persian portion of Part1. In this experiment, 20-dimensional MFCC features along with first and second derivatives are extracted from 16 kHz signals using HTK BIBREF11 with 25 ms Hamming windowed frames with 15 ms overlap. The reported results are obtained with a 400-dimensional gender independent i-vector based system. The i-vectors are first length-normalized and are further normalized using phrase- and gender-dependent Regularized Within-Class Covariance Normalization (RWCCN) BIBREF10. Cosine distance is used to obtain speaker verification scores and phrase- and gender-dependent s-norm is used for normalizing the scores. For aligning speech frames to Gaussian components, monophone HMMs with 3 states and 8 Gaussian components in each state are used BIBREF10. We only model the phonemes which appear in the 5 Persian text-dependent phrases. For the speaker verification experiments, the results are reported in terms of Equal Error Rate (EER) and the Normalized Detection Cost Function as defined for NIST SRE08 ($\mathrm {NDCF_{0.01}^{min}}$) and NIST SRE10 ($\mathrm {NDCF_{0.001}^{min}}$). As shown in Table TABREF22, in text-dependent SV there are 4 types of trials: Target-Correct and Imposter-Correct refer to trials where the pass-phrase is uttered correctly by target and imposter speakers respectively, and in the same manner, Target-Wrong and Imposter-Wrong refer to trials where speakers uttered a wrong pass-phrase. In this paper, only the correct trials (i.e. Target-Correct as target trials vs Imposter-Correct as non-target trials) are considered for evaluating systems, as it has been shown that these are the most challenging trials in text-dependent SV BIBREF8, BIBREF12. Table TABREF23 shows the results of the text-dependent experiments using the Persian 100-spk, 3-sess setup. For filtering trials, the respondents' mobile brand and model were used in this experiment. In the table, the first two letters in the filter notation relate to the target trials and the second two letters (i.e. the right side of the colon) relate to the non-target trials. For target trials, the first Y means the enrolment and test utterances were recorded using a device with the same brand by the target speaker. The second Y letter means both recordings were done using exactly the same device model. Similarly, the first Y for non-target trials means that the devices of the target and imposter speakers are from the same brand (i.e. manufacturer). The second Y means that, in addition to the same brand, both devices have the same model. So, the most difficult target trials are “NN”, where the speaker has used a different device at test time.
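As a minimal, illustrative sketch of the scoring recipe just described (length-normalized i-vectors, cosine scoring, and symmetric score normalization against an imposter cohort), the snippet below assumes i-vectors are already extracted as NumPy arrays; the RWCCN step and the phrase- and gender-dependent cohort selection are omitted here.

```python
import numpy as np

def length_norm(x):
    # project i-vectors onto the unit sphere (works for 1-D or 2-D arrays)
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def cosine_score(enroll_ivec, test_ivec):
    return float(np.dot(length_norm(enroll_ivec), length_norm(test_ivec)))

def s_norm(raw_score, enroll_ivec, test_ivec, cohort_ivecs):
    """Symmetric score normalization: shift and scale the raw score with the
    statistics of the enrollment and test scores against a cohort
    (phrase- and gender-dependent in the paper's setup)."""
    cohort = length_norm(cohort_ivecs)            # shape (n_cohort, dim)
    e = cohort @ length_norm(enroll_ivec)         # enrollment model vs cohort
    t = cohort @ length_norm(test_ivec)           # test segment vs cohort
    return 0.5 * ((raw_score - e.mean()) / e.std()
                  + (raw_score - t.mean()) / t.std())

# Enrollment with several utterances can be approximated by averaging their
# length-normalized i-vectors before scoring.
```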
Returning to the trial filters, the most difficult non-target trials, which should be rejected by the system, are “YY”, where the imposter speaker has used the same device model as the target speaker (note that this does not mean physically the same device, because each speaker participated in the project using a personal mobile device). Hence, the similarity in the recording channel makes rejection more difficult. The first row in Table TABREF23 shows the results for all trials. By comparing the results with the best published results on RSR2015 and RedDots BIBREF10, BIBREF8, BIBREF12, it is clear that the DeepMine database is more challenging than both the RSR2015 and RedDots databases. For RSR2015, the same i-vector/HMM-based method with both RWCCN and s-norm has achieved an EER of less than 0.3% for both genders (Table VI in BIBREF10). The conventional Relevance MAP adaptation with HMM alignment without applying any channel-compensation techniques (i.e. without applying RWCCN and s-norm, due to the lack of suitable training data) on RedDots Part1 for males has achieved an EER around 1.5% (Table XI in BIBREF10). It is worth noting that the EERs for the DeepMine database without any channel-compensation techniques are 2.1 and 3.7% for males and females respectively. One interesting advantage of the DeepMine database compared to both RSR2015 and RedDots is having several target speakers with more than one mobile device. This allows us to analyse the effects of channel compensation methods. The second row in Table TABREF23 corresponds to the most difficult trials, where the target trials come from mobile devices with different models while imposter trials come from the same device models. It is clear that severe degradation was caused by this kind of channel effect (i.e. decreasing within-speaker similarities while increasing between-speaker similarities), especially for females. The results in the third row show the condition when target speakers at test time use exactly the same device that was used for enrollment. Comparing this row with the results in the first row shows how much improvement can be achieved when exactly the same device is used by the target speaker. The results in the fourth row show the condition when imposter speakers also use the same device model at test time to fool the system. So, in this case, there is no device mismatch in any of the trials. By comparing the results with the third row, we can see how much degradation is caused if we only consider the non-target trials with the same device. The fifth row shows similar results when the imposter speakers use a device of the same brand as the target speaker but with a different model. Surprisingly, in this case, the degradation is negligible, which means that different mobile models from the same brand (manufacturer) have different recording channel properties. The degraded female results in the sixth row as compared to the third row show the effect of using a different device model from the same brand for target trials. For males, the filters bring almost the same subsets of trials, which explains the very similar results in this case. Looking at the first two and the last row of Table TABREF23, one can notice the significantly worse performance obtained for the female trials as compared to the male trials. Note that these three rows include target trials where the devices used for enrollment do not necessarily match the devices used for recording test utterances.
On the other hand, in rows 3 to 6, which exclude such mismatched trials, the performance for males and females is comparable. This suggests that the degraded results for females are caused by some problematic trials with device mismatch. The exact reason for this degradation is so far unclear and needs further investigation. In the last row of the table, the condition of the second row is relaxed: the target device should have a different model, possibly from the same brand, and the imposter device only needs to be from the same brand. In this case, as was expected, the performance degradation is smaller than in the second row. Experiments and Results ::: Speech Recognition Experiments In addition to speaker verification, we present several speech recognition experiments on Part3. The experiments were performed with the Kaldi toolkit BIBREF13. For training the HMM-based monophone model, only the 20 thousand shortest utterances are used, and for the other models the whole training data is used. The DNN based acoustic model is a time-delay DNN with low-rank factorized layers and skip connections, without i-vector adaptation (a modified network from one of the best performing LibriSpeech recipes). The network is shown in Table TABREF25: there are 16 F-TDNN layers, with dimension 1536 and linear bottleneck layers of dimension 256. The acoustic model is trained for 10 epochs using lattice-free maximum mutual information (LF-MMI) with cross-entropy regularization BIBREF14. Re-scoring is done using a pruned trigram language model, and the size of the dictionary is around 90,000 words. Table TABREF26 shows the results in terms of word error rate (WER) for the different evaluated methods. As can be seen, the created database can be used to train well performing and practically usable Persian ASR models. Conclusions In this paper, we have described the final version of a large speech corpus, the DeepMine database. It has been collected using crowdsourcing and, to the best of our knowledge, it is the largest public text-dependent and text-prompted speaker verification database in two languages: Persian and English. In addition, it is the largest text-independent speaker verification evaluation database, making it suitable to robustly evaluate state-of-the-art methods under different conditions. Alongside these appealing properties, it comes with phone-level transcription, making it suitable to train deep neural network models for Persian speech recognition. We provided several evaluation protocols for each part of the database. The protocols allow researchers to investigate the performance of different methods in various scenarios and study the effects of channels, duration and phrase text on the performance. We also provide two test sets for speech recognition: one normal test set with a few minutes of speech for each speaker, and one large test set with more speech (30 minutes on average) that can be used for any speaker adaptation method. As baseline results, we reported the performance of an i-vector/HMM based method on the Persian text-dependent part. Moreover, we conducted speech recognition experiments using conventional HMM-based methods, as well as a state-of-the-art deep neural network based method using the Kaldi toolkit, with promising performance. Text-dependent results have shown that the DeepMine database is more challenging than the RSR2015 and RedDots databases. Acknowledgments The data collection project was mainly supported by Sharif DeepMine company.
The work on the paper was supported by Czech National Science Foundation (GACR) project "NEUREM3" No. 19-26934X and the National Programme of Sustainability (NPU II) project "IT4Innovations excellence in science - LQ1602". Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: what is the source of the data? Only give me the answer and do not output any other words.
Answer:
[ "Android application" ]
276
4,966
single_doc_qa
ec666895f10a412d82baa5ee4aee1b5b
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Over the past two decades, the emergence of social media has enabled the proliferation of traceable human behavior. The content posted by users can reflect who their friends are, what topics they are interested in, or which company they are working for. At the same time, users are listing a number of profile fields to define themselves to others. The utilization of such metadata has proven important in facilitating further developments of applications in advertising BIBREF0 , personalization BIBREF1 , and recommender systems BIBREF2 . However, profile information can be limited, depending on the platform, or it is often deliberately omitted BIBREF3 . To uncloak this information, a number of studies have utilized social media users' footprints to approximate their profiles. This paper explores the potential of predicting a user's industry –the aggregate of enterprises in a particular field– by identifying industry indicative text in social media. The accurate prediction of users' industry can have a big impact on targeted advertising by minimizing wasted advertising BIBREF4 and improved personalized user experience. A number of studies in the social sciences have associated language use with social factors such as occupation, social class, education, and income BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . An additional goal of this paper is to examine such findings, and in particular the link between language and occupational class, through a data-driven approach. In addition, we explore how meaning changes depending on the occupational context. By leveraging word embeddings, we seek to quantify how, for example, cloud might mean a separate concept (e.g., condensed water vapor) in the text written by users that work in environmental jobs while it might be used differently by users in technology occupations (e.g., Internet-based computing). Specifically, this paper makes four main contributions. First, we build a large, industry-annotated dataset that contains over 20,000 blog users. In addition to their posted text, we also link a number of user metadata including their gender, location, occupation, introduction and interests. Second, we build content-based classifiers for the industry prediction task and study the effect of incorporating textual features from the users' profile metadata using various meta-classification techniques, significantly improving both the overall accuracy and the average per industry accuracy. Next, after examining which words are indicative for each industry, we build vector-space representations of word meanings and calculate one deviation for each industry, illustrating how meaning is differentiated based on the users' industries. We qualitatively examine the resulting industry-informed semantic representations of words by listing the words per industry that are most similar to job related and general interest terms. Finally, we rank the different industries based on the normalized relative frequencies of emotionally charged words (positive and negative) and, in addition, discover that, for both genders, these frequencies do not statistically significantly correlate with an industry's gender dominance ratio. After discussing related work in Section SECREF2 , we present the dataset used in this study in Section SECREF3 . 
In Section SECREF4 we evaluate two feature selection methods and examine the industry inference problem using the text of the users' postings. We then augment our content-based classifier by building an ensemble that incorporates several metadata classifiers. We list the most industry indicative words and expose how each industrial semantic field varies with respect to a variety of terms in Section SECREF5 . We explore how the frequencies of emotionally charged words in each gender correlate with the industries and their respective gender dominance ratio and, finally, conclude in Section SECREF6 . Related Work Alongside the wide adoption of social media by the public, researchers have been leveraging the newly available data to create and refine models of users' behavior and profiling. There exists a myriad research that analyzes language in order to profile social media users. Some studies sought to characterize users' personality BIBREF9 , BIBREF10 , while others sequenced the expressed emotions BIBREF11 , studied mental disorders BIBREF12 , and the progression of health conditions BIBREF13 . At the same time, a number of researchers sought to predict the social media users' age and/or gender BIBREF14 , BIBREF15 , BIBREF16 , while others targeted and analyzed the ethnicity, nationality, and race of the users BIBREF17 , BIBREF18 , BIBREF19 . One of the profile fields that has drawn a great deal of attention is the location of a user. Among others, Hecht et al. Hecht11 predicted Twitter users' locations using machine learning on nationwide and state levels. Later, Han et al. Han14 identified location indicative words to predict the location of Twitter users down to the city level. As a separate line of research, a number of studies have focused on discovering the political orientation of users BIBREF15 , BIBREF20 , BIBREF21 . Finally, Li et al. Li14a proposed a way to model major life events such as getting married, moving to a new place, or graduating. In a subsequent study, BIBREF22 described a weakly supervised information extraction method that was used in conjunction with social network information to identify the name of a user's spouse, the college they attended, and the company where they are employed. The line of work that is most closely related to our research is the one concerned with understanding the relation between people's language and their industry. Previous research from the fields of psychology and economics have explored the potential for predicting one's occupation from their ability to use math and verbal symbols BIBREF23 and the relationship between job-types and demographics BIBREF24 . More recently, Huang et al. Huang15 used machine learning to classify Sina Weibo users to twelve different platform-defined occupational classes highlighting the effect of homophily in user interactions. This work examined only users that have been verified by the Sina Weibo platform, introducing a potential bias in the resulting dataset. Finally, Preotiuc-Pietro et al. Preoctiuc15 predicted the occupational class of Twitter users using the Standard Occupational Classification (SOC) system, which groups the different jobs based on skill requirements. In that work, the data collection process was limited to only users that specifically mentioned their occupation in their self-description in a way that could be directly mapped to a SOC occupational class. The mapping between a substring of their self-description and a SOC occupational class was done manually. 
Because of the manual annotation step, their method was not scalable; moreover, because they identified the occupation class inside a user self-description, only a very small fraction of the Twitter users could be included (in their case, 5,191 users). Both of these recent studies are based on micro-blogging platforms, which inherently restrict the number of characters that a post can have, and consequently the way that users can express themselves. Moreover, both studies used off-the-shelf occupational taxonomies (rather than self-declared occupation categories), resulting in classes that are either too generic (e.g., media, welfare and electronic are three of the twelve Sina Weibo categories), or too intermixed (e.g., an assistant accountant is in a different class from an accountant in SOC). To address these limitations, we investigate the industry prediction task in a large blog corpus consisting of over 20K American users, 40K web-blogs, and 560K blog posts. Dataset We compile our industry-annotated dataset by identifying blogger profiles located in the U.S. on the profile finder on http://www.blogger.com, and scraping only those users that had the industry profile element completed. For each of these bloggers, we retrieve all their blogs, and for each of these blogs we download the 21 most recent blog postings. We then clean these blog posts of HTML tags and tokenize them, and drop those bloggers whose cumulative textual content in their posts is less than 600 characters. Following these guidelines, we identified all the U.S. bloggers with completed industry information. Traditionally, standardized industry taxonomies organize economic activities into groups based on similar production processes, products or services, delivery systems or behavior in financial markets. Following such assumptions and regardless of their many similarities, a tomato farmer would be categorized into a distinct industry from a tobacco farmer. As demonstrated in Preotiuc-Pietro et al. Preoctiuc15 such groupings can cause unwarranted misclassifications. The Blogger platform provides a total of 39 different industry options. Even though a completed industry value is an implicit text annotation, we acknowledge the same problem noted in previous studies: some categories are too broad, while others are very similar. To remedy this and following Guibert et al. Guibert71, who argued that the denominations used in a classification must reflect the purpose of the study, we group the different Blogger industries based on similar educational background and similar technical terminology. To do that, we exclude very general categories and merge conceptually similar ones. Examples of broad categories are the Education and the Student options: a teacher could be teaching in any concentration, while a student could be enrolled in any discipline. Examples of conceptually similar categories are the Investment Banking and the Banking options. The final set of categories is shown in Table TABREF1 , along with the number of users in each category. The resulting dataset consists of 22,880 users, 41,094 blogs, and 561,003 posts. Table TABREF2 presents additional statistics of our dataset. Text-based Industry Modeling After collecting our dataset, we split it into three sets: a train set, a development set, and a test set. The sizes of these sets are 17,880, 2,500, and 2,500 users, respectively, with users randomly assigned to these sets. 
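A rough sketch of the user-level preparation and split described above (HTML cleaning, aggregating each blogger's posts, dropping users with under 600 characters of text, and a random train/development/test split) is given below. The cleaning here is a crude regex-based stand-in; the exact tokenization and cleaning used for the dataset may differ.

```python
import random
import re

def clean_html(text):
    # crude stand-in for the HTML-tag cleaning step
    return re.sub(r"<[^>]+>", " ", text)

def build_user_corpus(posts_by_user, min_chars=600):
    """Concatenate each blogger's posts and keep users with enough text."""
    corpus = {}
    for user, posts in posts_by_user.items():
        text = " ".join(clean_html(p) for p in posts)
        if len(text) >= min_chars:
            corpus[user] = text
    return corpus

def split_users(user_ids, sizes=(17880, 2500, 2500), seed=0):
    """Randomly assign users to train/development/test sets."""
    user_ids = sorted(user_ids)
    random.Random(seed).shuffle(user_ids)
    a, b, c = sizes
    return user_ids[:a], user_ids[a:a + b], user_ids[a + b:a + b + c]
```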
In all the experiments that follow, we evaluate our classifiers by training them on the train set, configure the parameters and measure performance on the development set, and finally report the prediction accuracy and results on the test set. Note that all the experiments are performed at user level, i.e., all the data for one user is compiled into one instance in our data sets. To measure the performance of our classifiers, we use the prediction accuracy. However, as shown in Table TABREF1 , the available data is skewed across categories, which could lead to somewhat distorted accuracy numbers depending on how well a model learns to predict the most populous classes. Moreover, accuracy alone does not provide a great deal of insight into the individual performance per industry, which is one of the main objectives in this study. Therefore, in our results below, we report: (1) micro-accuracy ( INLINEFORM0 ), calculated as the percentage of correctly classified instances out of all the instances in the development (test) data; and (2) macro-accuracy ( INLINEFORM1 ), calculated as the average of the per-category accuracies, where the per-category accuracy is the percentage of correctly classified instances out of the instances belonging to one category in the development (test) data. Leveraging Blog Content In this section, we seek the effectiveness of using solely textual features obtained from the users' postings to predict their industry. The industry prediction baseline Majority is set by discovering the most frequently featured class in our training set and picking that class in all predictions in the respective development or testing set. After excluding all the words that are not used by at least three separate users in our training set, we build our AllWords model by counting the frequencies of all the remaining words and training a multinomial Naive Bayes classifier. As seen in Figure FIGREF3 , we can far exceed the Majority baseline performance by incorporating basic language signals into machine learning algorithms (173% INLINEFORM0 improvement). We additionally explore the potential of improving our text classification task by applying a number of feature ranking methods and selecting varying proportions of top ranked features in an attempt to exclude noisy features. We start by ranking the different features, w, according to their Information Gain Ratio score (IGR) with respect to every industry, i, and training our classifier using different proportions of the top features. INLINEFORM0 INLINEFORM1 Even though we find that using the top 95% of all the features already exceeds the performance of the All Words model on the development data, we further experiment with ranking our features with a more aggressive formula that heavily promotes the features that are tightly associated with any industry category. Therefore, for every word in our training set, we define our newly introduced ranking method, the Aggressive Feature Ranking (AFR), as: INLINEFORM0 In Figure FIGREF3 we illustrate the performance of all four methods in our industry prediction task on the development data. Note that for each method, we provide both the accuracy ( INLINEFORM0 ) and the average per-class accuracy ( INLINEFORM1 ). The Majority and All Words methods apply to all the features; therefore, they are represented as a straight line in the figure. The IGR and AFR methods are applied to varying subsets of the features using a 5% step. 
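The feature-selection pipeline described above can be sketched as follows. The paper's IGR and AFR scoring formulas are not reproduced here; this sketch plugs in scikit-learn's chi-squared scorer as a stand-in ranking function, keeps the top 90% of word features, and trains the multinomial Naive Bayes classifier on user-level documents.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectPercentile, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# One document per user; min_df=3 roughly mirrors keeping words used by at
# least three separate users (assuming one training document per user).
industry_model = Pipeline([
    ("bow", CountVectorizer(min_df=3)),
    ("select", SelectPercentile(chi2, percentile=90)),  # stand-in for IGR/AFR
    ("nb", MultinomialNB()),
])

# industry_model.fit(train_texts, train_industries)
# micro_acc = industry_model.score(dev_texts, dev_industries)
```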
Our experiments demonstrate that the word choice that the users make in their posts correlates with their industry. The first observation in Figure FIGREF3 is that the INLINEFORM0 is proportional to INLINEFORM1 ; as INLINEFORM2 increases, so does INLINEFORM3 . Secondly, the best result on the development set is achieved by using the top 90% of the features using the AFR method. Lastly, the improvements of the IGR and AFR feature selections are not substantially better in comparison to All Words (at most 5% improvement between All Words and AFR), which suggest that only a few noisy features exist and most of the words play some role in shaping the “language" of an industry. As a final evaluation, we apply on the test data the classifier found to work best on the development data (AFR feature selection, top 90% features), for an INLINEFORM0 of 0.534 and INLINEFORM1 of 0.477. Leveraging User Metadata Together with the industry information and the most recent postings of each blogger, we also download a number of accompanying profile elements. Using these additional elements, we explore the potential of incorporating users' metadata in our classifiers. Table TABREF7 shows the different user metadata we consider together with their coverage percentage (not all users provide a value for all of the profile elements). With the exception of the gender field, the remaining metadata elements shown in Table TABREF7 are completed by the users as a freely editable text field. This introduces a considerable amount of noise in the set of possible metadata values. Examples of noise in the occupation field include values such as “Retired”, “I work.”, or “momma” which are not necessarily informative for our industry prediction task. To examine whether the metadata fields can help in the prediction of a user's industry, we build classifiers using the different metadata elements. For each metadata element that has a textual value, we use all the words in the training set for that field as features. The only two exceptions are the state field, which is encoded as one feature that can take one out of 50 different values representing the 50 U.S. states; and the gender field, which is encoded as a feature with a distinct value for each user gender option: undefined, male, or female. As shown in Table TABREF9 , we build four different classifiers using the multinomial NB algorithm: Occu (which uses the words found in the occupation profile element), Intro (introduction), Inter (interests), and Gloc (combined gender, city, state). In general, all the metadata classifiers perform better than our majority baseline ( INLINEFORM0 of 18.88%). For the Gloc classifier, this result is in alignment with previous studies BIBREF24 . However, the only metadata classifier that outperforms the content classifier is the Occu classifier, which despite missing and noisy occupation values exceeds the content classifier's performance by an absolute 3.2%. To investigate the promise of combining the five different classifiers we have built so far, we calculate their inter-prediction agreement using Fleiss's Kappa BIBREF25 , as well as the lower prediction bounds using the double fault measure BIBREF26 . The Kappa values, presented in the lower left side of Table TABREF10 , express the classification agreement for categorical items, in this case the users' industry. Lower values, especially values below 30%, mean smaller agreement. 
Since all five classifiers have better-than-baseline accuracy, this low agreement suggests that their predictions could potentially be combined to achieve a better accumulated result. Moreover, the double fault measure values, which are presented in the top-right hand side of Table TABREF10 , express the proportion of test cases for which both of the two respective classifiers make false predictions, essentially providing the lowest error bound for the pairwise ensemble classifier performance. The lower those numbers are, the greater the accuracy potential of any meta-classification scheme that combines those classifiers. Once again, the low double fault measure values suggest potential gain from a combination of the base classifiers into an ensemble of models. After establishing the promise of creating an ensemble of classifiers, we implement two meta-classification approaches. First, we combine our classifiers using features concatenation (or early fusion). Starting with our content-based classifier (Text), we successively add the features derived from each metadata element. The results, both micro- and macro-accuracy, are presented in Table TABREF12 . Even though all these four feature concatenation ensembles outperform the content-based classifier in the development set, they fail to outperform the Occu classifier. Second, we explore the potential of using stacked generalization (or late fusion) BIBREF27 . The base classifiers, referred to as L0 classifiers, are trained on different folds of the training set and used to predict the class of the remaining instances. Those predictions are then used together with the true label of the training instances to train a second classifier, referred to as the L1 classifier: this L1 is used to produce the final prediction on both the development data and the test data. Traditionally, stacking uses different machine learning algorithms on the same training data. However in our case, we use the same algorithm (multinomial NB) on heterogeneous data (i.e., different types of data such as content, occupation, introduction, interests, gender, city and state) in order to exploit all available sources of information. The ensemble learning results on the development set are shown in Table TABREF12 . We notice a constant improvement for both metrics when adding more classifiers to our ensemble except for the Gloc classifier, which slightly reduces the performance. The best result is achieved using an ensemble of the Text, Occu, Intro, and Inter L0 classifiers; the respective performance on the test set is an INLINEFORM0 of 0.643 and an INLINEFORM1 of 0.564. Finally, we present in Figure FIGREF11 the prediction accuracy for the final classifier for each of the different industries in our test dataset. Evidently, some industries are easier to predict than others. For example, while the Real Estate and Religion industries achieve accuracy figures above 80%, other industries, such as the Banking industry, are predicted correctly in less than 17% of the time. Anecdotal evidence drawn from the examination of the confusion matrix does not encourage any strong association of the Banking class with any other. The misclassifications are roughly uniform across all other classes, suggesting that the users in the Banking industry use language in a non-distinguishing way. Qualitative Analysis In this section, we provide a qualitative analysis of the language of the different industries. 
Top-Ranked Words To conduct a qualitative exploration of which words indicate the industry of a user, Table TABREF14 shows the three top-ranking content words for the different industries using the AFR method. Not surprisingly, the top ranked words align well with what we would intuitively expect for each industry. Even though most of these words are potentially used by many users regardless of their industry in our dataset, they are still distinguished by the AFR method because of the different frequencies of these words in the text of each industry. Industry-specific Word Similarities Next, we examine how the meaning of a word is shaped by the context in which it is uttered. In particular, we qualitatively investigate how the speakers' industry affects meaning by learning vector-space representations of words that take into account such contextual information. To achieve this, we apply the contextualized word embeddings proposed by Bamman et al. Bamman14, which are based on an extension of the “skip-gram" language model BIBREF28 . In addition to learning a global representation for each word, these contextualized embeddings compute one deviation from the common word embedding representation for each contextual variable, in this case, an industry option. These deviations capture the terms' meaning variations (shifts in the INLINEFORM0 -dimensional space of the representations, where INLINEFORM1 in our experiments) in the text of the different industries, however all the embeddings are in the same vector space to allow for comparisons to one another. Using the word representations learned for each industry, we present in Table TABREF16 the terms in the Technology and the Tourism industries that have the highest cosine similarity with a job-related word, customers. Similarly, Table TABREF17 shows the words in the Environment and the Tourism industries that are closest in meaning to a general interest word, food. More examples are given in the Appendix SECREF8 . The terms that rank highest in each industry are noticeably different. For example, as seen in Table TABREF17 , while food in the Environment industry is similar to nutritionally and locally, in the Tourism industry the same word relates more to terms such as delicious and pastries. These results not only emphasize the existing differences in how people in different industries perceive certain terms, but they also demonstrate that those differences can effectively be captured in the resulting word embeddings. Emotional Orientation per Industry and Gender As a final analysis, we explore how words that are emotionally charged relate to different industries. To quantify the emotional orientation of a text, we use the Positive Emotion and Negative Emotion categories in the Linguistic Inquiry and Word Count (LIWC) dictionary BIBREF29 . The LIWC dictionary contains lists of words that have been shown to correlate with the psychological states of people that use them; for example, the Positive Emotion category contains words such as “happy,” “pretty,” and “good.” For the text of all the users in each industry we measure the frequencies of Positive Emotion and Negative Emotion words normalized by the text's length. Table TABREF20 presents the industries' ranking for both categories of words based on their relative frequencies in the text of each industry. We further perform a breakdown per-gender, where we once again calculate the proportion of emotionally charged words in each industry, but separately for each gender. 
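The frequency computation just described can be sketched as follows. Because the LIWC word lists are proprietary, tiny placeholder lists stand in for the Positive Emotion and Negative Emotion categories (only the three positive examples quoted above come from LIWC), and the per-industry texts are invented.

# Relative frequency of emotionally charged words per industry:
# counts of category words normalized by the text's length in tokens.
POSITIVE = {"happy", "pretty", "good"}    # examples quoted from LIWC above
NEGATIVE = {"sad", "hate", "ugly"}        # placeholder stand-ins

def emotion_rates(tokens):
    n = len(tokens) or 1
    pos = sum(tok in POSITIVE for tok in tokens) / n
    neg = sum(tok in NEGATIVE for tok in tokens) / n
    return pos, neg

texts_by_industry = {                      # invented example corpus
    "Fashion":    "pretty good collection , happy with the fit".split(),
    "Automotive": "hate the recall , sad news for the brand".split(),
}
ranking = sorted(texts_by_industry,
                 key=lambda ind: emotion_rates(texts_by_industry[ind])[0],
                 reverse=True)
for industry in ranking:
    print(industry, emotion_rates(texts_by_industry[industry]))

The same computation run separately on each gender's text within an industry gives the per-gender breakdown described above.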
We find that the industry rankings of the relative frequencies INLINEFORM0 of emotionally charged words for the two genders are statistically significantly correlated, which suggests that regardless of their gender, users use positive (or negative) words with a relative frequency that correlates with their industry. (In other words, even if e.g., Fashion has a larger number of women users, both men and women working in Fashion will tend to use more positive words than the corresponding gender in another industry with a larger number of men users such as Automotive.) Finally, motivated by previous findings of correlations between job satisfaction and gender dominance in the workplace BIBREF30 , we explore the relationship between the usage of Positive Emotion and Negative Emotion words and the gender dominance in an industry. Although we find that there are substantial gender imbalances in each industry (Appendix SECREF9 ), we did not find any statistically significant correlation between the gender dominance ratio in the different industries and the usage of positive (or negative) emotional words in either gender in our dataset. Conclusion In this paper, we examined the task of predicting a social media user's industry. We introduced an annotated dataset of over 20,000 blog users and applied a content-based classifier in conjunction with two feature selection methods for an overall accuracy of up to 0.534, which represents a large improvement over the majority class baseline of 0.188. We also demonstrated how the user metadata can be incorporated in our classifiers. Although concatenation of features drawn both from blog content and profile elements did not yield any clear improvements over the best individual classifiers, we found that stacking improves the prediction accuracy to an overall accuracy of 0.643, as measured on our test dataset. A more in-depth analysis showed that not all industries are equally easy to predict: while industries such as Real Estate and Religion are clearly distinguishable with accuracy figures over 0.80, others such as Banking are much harder to predict. Finally, we presented a qualitative analysis to provide some insights into the language of different industries, which highlighted differences in the top-ranked words in each industry, word semantic similarities, and the relative frequency of emotionally charged words. Acknowledgments This material is based in part upon work supported by the National Science Foundation (#1344257) and by the John Templeton Foundation (#48503). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the John Templeton Foundation. Additional Examples of Word Similarities Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How many users do they look at? Only give me the answer and do not output any other words.
Answer:
[ "22,880 users", "20,000" ]
276
5,172
single_doc_qa
a87aaf9a6be84a23a92c990ae3a14bde
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The irony is a kind of figurative language, which is widely used on social media BIBREF0 . The irony is defined as a clash between the intended meaning of a sentence and its literal meaning BIBREF1 . As an important aspect of language, irony plays an essential role in sentiment analysis BIBREF2 , BIBREF0 and opinion mining BIBREF3 , BIBREF4 . Although some previous studies focus on irony detection, little attention is paid to irony generation. As ironies can strengthen sentiments and express stronger emotions, we mainly focus on generating ironic sentences. Given a non-ironic sentence, we implement a neural network to transfer it to an ironic sentence and constrain the sentiment polarity of the two sentences to be the same. For example, the input is “I hate it when my plans get ruined" which is negative in sentiment polarity and the output should be ironic and negative in sentiment as well, such as “I like it when my plans get ruined". The speaker uses “like" to be ironic and express his or her negative sentiment. At the same time, our model can preserve contents which are irrelevant to sentiment polarity and irony. According to the categories mentioned in BIBREF5 , irony can be classified into 3 classes: verbal irony by means of a polarity contrast, the sentences containing expression whose polarity is inverted between the intended and the literal evaluation; other types of verbal irony, the sentences that show no polarity contrast between the literal and intended meaning but are still ironic; and situational irony, the sentences that describe situations that fail to meet some expectations. As ironies in the latter two categories are obscure and hard to understand, we decide to only focus on ironies in the first category in this work. For example, our work can be specifically described as: given a sentence “I hate to be ignored", we train our model to generate an ironic sentence such as “I love to be ignored". Although there is “love" in the generated sentence, the speaker still expresses his or her negative sentiment by irony. We also make some explorations in the transformation from ironic sentences to non-ironic sentences at the end of our work. Because of the lack of previous work and baselines on irony generation, we implement our model based on style transfer. Our work will not only provide the first large-scale irony dataset but also make our model as a benchmark for the irony generation. Recently, unsupervised style transfer becomes a very popular topic. Many state-of-the-art studies try to solve the task with sequence-to-sequence (seq2seq) framework. There are three main ways to build up models. The first is to learn a latent style-independent content representation and generate sentences with the content representation and another style BIBREF6 , BIBREF7 . The second is to directly transfer sentences from one style to another under the control of classifiers and reinforcement learning BIBREF8 . The third is to remove style attribute words from the input sentence and combine the remaining content with new style attribute words BIBREF9 , BIBREF10 . The first method usually obtains better performances via adversarial training with discriminators. 
The style-independent content representation, nevertheless, is not easily obtained BIBREF11 , which results in poor performances. The second method is suitable for complex styles which are difficult to model and describe. The model can learn the deep semantic features by itself but sometimes the model is sensitive to parameters and hard to train. The third method succeeds to preserve content but cannot work for some complex styles such as democratic and republican. Sentences with those styles usually do not have specific style attribute words. Unfortunately, due to the lack of large irony dataset and difficulties of modeling ironies, there has been little work trying to generate ironies based on seq2seq framework as far as we know. Inspired by methods for style transfer, we decide to implement a specifically designed model based on unsupervised style transfer to explore irony generation. In this paper, in order to address the lack of irony data, we first crawl over 2M tweets from twitter to build a dataset with 262,755 ironic and 112,330 non-ironic tweets. Then, due to the lack of parallel data, we propose a novel model to transfer non-ironic sentences to ironic sentences in an unsupervised way. As ironic style is hard to model and describe, we implement our model with the control of classifiers and reinforcement learning. Different from other studies in style transfer, the transformation from non-ironic to ironic sentences has to preserve sentiment polarity as mentioned above. Therefore, we not only design an irony reward to control the irony accuracy and implement denoising auto-encoder and back-translation to control content preservation but also design a sentiment reward to control sentiment preservation. Experimental results demonstrate that our model achieves a high irony accuracy with well-preserved sentiment and content. The contributions of our work are as follows: Related Work Style Transfer: As irony is a complicated style and hard to model with some specific style attribute words, we mainly focus on studies without editing style attribute words. Some studies are trying to disentangle style representation from content representation. In BIBREF12 , authors leverage adversarial networks to learn separate content representations and style representations. In BIBREF13 and BIBREF6 , researchers combine variational auto-encoders (VAEs) with style discriminators. However, some recent studies BIBREF11 reveal that the disentanglement of content and style representations may not be achieved in practice. Therefore, some other research studies BIBREF9 , BIBREF10 strive to separate content and style by removing stylistic words. Nonetheless, many non-ironic sentences do not have specific stylistic words and as a result, we find it difficult to transfer non-ironic sentences to ironic sentences through this way in practice. Besides, some other research studies do not disentangle style from content but directly learn representations of sentences. In BIBREF8 , authors propose a dual reinforcement learning framework without separating content and style representations. In BIBREF7 , researchers utilize a machine translation model to learn a sentence representation preserving the meaning of the sentence but reducing stylistic properties. In this method, the quality of generated sentences relies on the performance of classifiers to a large extent. Meanwhile, such models are usually sensitive to parameters and difficult to train. 
In contrast, we combine a pre-training process with reinforcement learning to build up a stable language model and design special rewards for our task. Irony Detection: With the development of social media, irony detection becomes a more important task. Methods for irony detection can be mainly divided into two categories: methods based on feature engineering and methods based on neural networks. As for methods based on feature engineering, In BIBREF1 , authors investigate pragmatic phenomena and various irony markers. In BIBREF14 , researchers leverage a combination of sentiment, distributional semantic and text surface features. Those models rely on hand-crafted features and are hard to implement. When it comes to methods based on neural networks, long short-term memory (LSTM) BIBREF15 network is widely used and is very efficient for irony detection. In BIBREF16 , a tweet is divided into two segments and a subtract layer is implemented to calculate the difference between two segments in order to determine whether the tweet is ironic. In BIBREF17 , authors utilize a recurrent neural network with Bi-LSTM and self-attention without hand-crafted features. In BIBREF18 , researchers propose a system based on a densely connected LSTM network. Our Dataset In this section, we describe how we build our dataset with tweets. First, we crawl over 2M tweets from twitter using GetOldTweets-python. We crawl English tweets from 04/09/2012 to /12/18/2018. We first remove all re-tweets and use langdetect to remove all non-English sentences. Then, we remove hashtags attached at the end of the tweets because they are usually not parts of sentences and will confuse our language model. After that, we utilize Ekphrasis to process tweets. We remove URLs and restore remaining hashtags, elongated words, repeated words, and all-capitalized words. To simplify our dataset, We replace all “ INLINEFORM0 money INLINEFORM1 " and “ INLINEFORM2 time INLINEFORM3 " tokens with “ INLINEFORM4 number INLINEFORM5 " token when using Ekphrasis. And we delete sentences whose lengths are less than 10 or greater than 40. In order to restore abbreviations, we download an abbreviation dictionary from webopedia and restore abbreviations to normal words or phrases according to the dictionary. Finally, we remove sentences which have more than two rare words (appearing less than three times) in order to constrain the size of vocabulary. Finally, we get 662,530 sentences after pre-processing. As neural networks are proved effective in irony detection, we decide to implement a neural classifier in order to classify the sentences into ironic and non-ironic sentences. However, the only high-quality irony dataset we can obtain is the dataset of Semeval-2018 Task 3 and the dataset is pretty small, which will cause overfitting to complex models. Therefore, we just implement a simple one-layer RNN with LSTM cell to classify pre-processed sentences into ironic sentences and non-ironic sentences because LSTM networks are widely used in irony detection. We train the model with the dataset of Semeval-2018 Task 3. After classification, we get 262,755 ironic sentences and 399,775 non-ironic sentences. According to our observation, not all non-ironic sentences are suitable to be transferred into ironic sentences. For example, “just hanging out . watching . is it monday yet" is hard to transfer because it does not have an explicit sentiment polarity. 
So we remove all interrogative sentences from the non-ironic sentences and only obtain the sentences which have words expressing strong sentiments. We evaluate the sentiment polarity of each word with TextBlob and we view those words with sentiment scores greater than 0.5 or less than -0.5 as words expressing strong sentiments. Finally, we build our irony dataset with 262,755 ironic sentences and 102,330 non-ironic sentences. [t] Irony Generation Algorithm INLINEFORM0 pre-train with auto-encoder Pre-train INLINEFORM1 , INLINEFORM2 with INLINEFORM3 using MLE based on Eq. EQREF16 Pre-train INLINEFORM4 , INLINEFORM5 with INLINEFORM6 using MLE based on Eq. EQREF17 INLINEFORM7 pre-train with back-translation Pre-train INLINEFORM8 , INLINEFORM9 , INLINEFORM10 , INLINEFORM11 with INLINEFORM12 using MLE based on Eq. EQREF19 Pre-train INLINEFORM13 , INLINEFORM14 , INLINEFORM15 , INLINEFORM16 with INLINEFORM17 using MLE based on Eq. EQREF20 INLINEFORM0 train with RL each epoch e = 1, 2, ..., INLINEFORM1 INLINEFORM2 train non-irony2irony with RL INLINEFORM3 in N INLINEFORM4 update INLINEFORM5 , INLINEFORM6 , using INLINEFORM7 based on Eq. EQREF29 INLINEFORM8 back-translation INLINEFORM9 INLINEFORM10 INLINEFORM11 update INLINEFORM12 , INLINEFORM13 , INLINEFORM14 , INLINEFORM15 using MLE based on Eq. EQREF19 INLINEFORM16 train irony2non-irony with RL INLINEFORM17 in I INLINEFORM18 update INLINEFORM19 , INLINEFORM20 , using INLINEFORM21 similar to Eq. EQREF29 INLINEFORM22 back-translation INLINEFORM23 INLINEFORM24 INLINEFORM25 update INLINEFORM26 , INLINEFORM27 , INLINEFORM28 , INLINEFORM29 using MLE based on Eq. EQREF20 Our Method Given two non-parallel corpora: non-ironic corpus N={ INLINEFORM0 , INLINEFORM1 , ..., INLINEFORM2 } and ironic corpus I={ INLINEFORM3 , INLINEFORM4 , ..., INLINEFORM5 }, the goal of our irony generation model is to generate an ironic sentence from a non-ironic sentence while preserving the content and sentiment polarity of the source input sentence. We implement an encoder-decoder framework where two encoders are utilized to encode ironic sentences and non-ironic sentences respectively and two decoders are utilized to decode ironic sentences and non-ironic sentences from latent representations respectively. In order to enforce a shared latent space, we share two layers on both the encoder side and the decoder side. Our model architecture is illustrated in Figure FIGREF13 . We denote irony encoder as INLINEFORM6 , irony decoder as INLINEFORM7 and non-irony encoder as INLINEFORM8 , non-irony decoder as INLINEFORM9 . Their parameters are INLINEFORM10 , INLINEFORM11 , INLINEFORM12 and INLINEFORM13 . Our irony generation algorithm is shown in Algorithm SECREF3 . We first pre-train our model using denoising auto-encoder and back-translation to build up language models for both styles (section SECREF14 ). Then we implement reinforcement learning to train the model to transfer sentences from one style to another (section SECREF21 ). Meanwhile, to achieve content preservation, we utilize back-translation for one time in every INLINEFORM0 time steps. Pretraining In order to build up our language model and preserve the content, we apply the auto-encoder model. To prevent the model from simply copying the input sentence, we randomly add some noises in the input sentence. Specifically, for every word in the input sentence, there is 10% chance that we delete it, 10 % chance that we duplicate it, 10% chance that we swap it with the next word, or it remains unchanged. 
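The noising step just described can be sketched as below. Treating the three corruptions as mutually exclusive 10% chances per token is one plausible reading of the description, not the authors' actual implementation.

# Denoising auto-encoder input corruption: for each word, 10% chance of
# deletion, 10% of duplication, 10% of swapping with the next word,
# otherwise the word is kept unchanged.
import random

def add_noise(tokens, p=0.1, seed=None):
    rng = random.Random(seed)
    out, i = [], 0
    while i < len(tokens):
        r = rng.random()
        if r < p:                                   # delete
            i += 1
        elif r < 2 * p:                             # duplicate
            out.extend([tokens[i], tokens[i]])
            i += 1
        elif r < 3 * p and i + 1 < len(tokens):     # swap with next word
            out.extend([tokens[i + 1], tokens[i]])
            i += 2
        else:                                       # keep unchanged
            out.append(tokens[i])
            i += 1
    return out

print(add_noise("i hate it when my plans get ruined".split(), seed=0))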
We first encode the input sentence INLINEFORM0 or INLINEFORM1 with respective encoder INLINEFORM2 or INLINEFORM3 to obtain its latent representation INLINEFORM4 or INLINEFORM5 and reconstruct the input sentence with the latent representation and respective decoder. So we can get the reconstruction loss for auto-encoder INLINEFORM6 : DISPLAYFORM0 DISPLAYFORM1 In addition to denoising auto-encoder, we implement back-translation BIBREF19 to generate a pseudo-parallel corpus. Suppose our model takes non-ironic sentence INLINEFORM0 as input. We first encode INLINEFORM1 with INLINEFORM2 to obtain its latent representation INLINEFORM3 and decode the latent representation with INLINEFORM4 to get a transferred sentence INLINEFORM5 . Then we encode INLINEFORM6 with INLINEFORM7 and decode its latent representation with INLINEFORM8 to reconstruct the original input sentence INLINEFORM9 . Therefore, our reconstruction loss for back-translation INLINEFORM10 : DISPLAYFORM0 And if our model takes ironic sentence INLINEFORM0 as input, we can get the reconstruction loss for back-translation as: DISPLAYFORM0 Reinforcement Learning Since the gold transferred result of input is unavailable, we cannot evaluate the quality of the generated sentence directly. Therefore, we implement reinforcement learning and elaborately design two rewards to describe the irony accuracy and sentiment preservation, respectively. A pre-trained binary irony classifier based on CNN BIBREF20 is used to evaluate how ironic a sentence is. We denote the parameter of the classifier as INLINEFORM0 and it is fixed during the training process. In order to facilitate the transformation, we design the irony reward as the difference between the irony score of the input sentence and that of the output sentence. Formally, when we input a non-ironic sentence INLINEFORM0 and transfer it to an ironic sentence INLINEFORM1 , our irony reward is defined as: DISPLAYFORM0 where INLINEFORM0 denotes ironic style and INLINEFORM1 is the probability of that a sentence INLINEFORM2 is ironic. To preserve the sentiment polarity of the input sentence, we also need to use classifiers to evaluate the sentiment polarity of the sentences. However, the sentiment analysis of ironic sentences and non-ironic sentences are different. In the case of figurative languages such as irony, sarcasm or metaphor, the sentiment polarity of the literal meaning may differ significantly from that of the intended figurative meaning BIBREF0 . As we aim to train our model to transfer sentences from non-ironic to ironic, using only one classifier is not enough. As a result, we implement two pre-trained sentiment classifiers for non-ironic sentences and ironic sentences respectively. We denote the parameter of the sentiment classifier for ironic sentences as INLINEFORM0 and that of the sentiment classifier for non-ironic sentences as INLINEFORM1 . A challenge, when we implement two classifiers to evaluate the sentiment polarity, is that the two classifiers trained with different datasets may have different distributions of scores. That means we cannot directly calculate the sentiment reward with scores applied by two classifiers. To alleviate this problem and standardize the prediction results of two classifiers, we set a threshold for each classifier and subtract the respective threshold from scores applied by the classifier to obtain the comparative sentiment polarity score. 
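The two quantities introduced so far can be sketched as follows. The sign convention for the irony reward (output score minus input score, so that more ironic outputs are rewarded in the non-ironic-to-ironic direction) is an assumption based on the description above, and all probabilities and thresholds below are made-up placeholders rather than outputs of the paper's trained classifiers.

# Irony reward: difference between the irony classifier's scores for the
# generated sentence and the input sentence (non-ironic -> ironic direction).
def irony_reward(p_irony_input, p_irony_output):
    return p_irony_output - p_irony_input

# Standardized sentiment score: the classifier-specific threshold is
# subtracted from the raw positive-class probability so that the two
# sentiment classifiers become comparable.
def standardized_sentiment(p_positive, threshold):
    return p_positive - threshold

r_irony = irony_reward(p_irony_input=0.05, p_irony_output=0.82)
s_in = standardized_sentiment(p_positive=0.20, threshold=0.55)   # non-irony classifier
s_out = standardized_sentiment(p_positive=0.35, threshold=0.60)  # irony classifier
print(r_irony, s_in, s_out)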
We get the optimal threshold by maximizing the ability of the classifier according to the distribution of our training data. We denote the threshold of ironic sentiment classifier as INLINEFORM0 and the threshold of non-ironic sentiment classifier as INLINEFORM1 . The standardized sentiment score is defined as INLINEFORM2 and INLINEFORM3 where INLINEFORM4 denotes the positive sentiment polarity and INLINEFORM5 is the probability of that a sentence is positive in sentiment polarity. As mentioned above, the input sentence and the generated sentence should express the same sentiment. For example, if we input a non-ironic sentence “I hate to be ignored" which is negative in sentiment polarity, the generated ironic sentence should be also negative, such as “I love to be ignored". To achieve sentiment preservation, we design the sentiment reward as that one minus the absolute value of the difference between the standardized sentiment score of the input sentence and that of the generated sentence. Formally, when we input a non-ironic sentence INLINEFORM0 and transfer it to an ironic sentence INLINEFORM1 , our sentiment reward is defined as: DISPLAYFORM0 To encourage our model to focus on both the irony accuracy and the sentiment preservation, we apply the harmonic mean of irony reward and sentiment reward: DISPLAYFORM0 Policy Gradient The policy gradient algorithm BIBREF21 is a simple but widely-used algorithm in reinforcement learning. It is used to maximize the expected reward INLINEFORM0 . The objective function to minimize is defined as: DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 is the reward of INLINEFORM2 and INLINEFORM3 is the input size. Training Details INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 in our model are Transformers BIBREF22 with 4 layers and 2 shared layers. The word embeddings of 128 dimensions are learned during the training process. Our maximum sentence length is set as 40. The optimizer is Adam BIBREF23 and the learning rate is INLINEFORM4 . The batch size is 32 and harmonic weight INLINEFORM5 in Eq.9 is 0.5. We set the interval INLINEFORM6 as 200. The model is pre-trained for 6 epochs and trained for 15 epochs for reinforcement learning. Irony Classifier: We implement a CNN classifier trained with our irony dataset. All the CNN classifiers we utilize in this paper use the same parameters as BIBREF20 . Sentiment Classifier for Irony: We first implement a one-layer LSTM network to classify ironic sentences in our dataset into positive and negative ironies. The LSTM network is trained with the dataset of Semeval 2015 Task 11 BIBREF0 which is used for the sentiment analysis of figurative language in twitter. Then, we use the positive ironies and negative ironies to train the CNN sentiment classifier for irony. Sentiment Classifier for Non-irony: Similar to the training process of the sentiment classifier for irony, we first implement a one-layer LSTM network trained with the dataset for the sentiment analysis of common twitters to classify the non-ironies into positive and negative non-ironies. Then we use the positive and negative non-ironies to train the sentiment classifier for non-irony. Baselines We compare our model with the following state-of-art generative models: BackTrans BIBREF7 : In BIBREF7 , authors propose a model using machine translation in order to preserve the meaning of the sentence while reducing stylistic properties. 
Unpaired BIBREF10 : In BIBREF10 , researchers implement a method to remove emotional words and add desired sentiment controlled by reinforcement learning. CrossAlign BIBREF6 : In BIBREF6 , authors leverage refined alignment of latent representations to perform style transfer and a cross-aligned auto-encoder is implemented. CPTG BIBREF24 : An interpolated reconstruction loss is introduced in BIBREF24 and a discriminator is implemented to control attributes in this work. DualRL BIBREF8 : In BIBREF8 , researchers use two reinforcement rewards simultaneously to control style accuracy and content preservation. Evaluation Metrics In order to evaluate sentiment preservation, we use the absolute value of the difference between the standardized sentiment score of the input sentence and that of the generated sentence. We call the value as sentiment delta (senti delta). Besides, we report the sentiment accuracy (Senti ACC) which measures whether the output sentence has the same sentiment polarity as the input sentence based on our standardized sentiment classifiers. The BLEU score BIBREF25 between the input sentences and the output sentences is calculated to evaluate the content preservation performance. In order to evaluate the overall performance of different models, we also report the geometric mean (G2) and harmonic mean (H2) of the sentiment accuracy and the BLEU score. As for the irony accuracy, we only report it in human evaluation results because it is more accurate for the human to evaluate the quality of irony as it is very complicated. We first sample 50 non-ironic input sentences and their corresponding output sentences of different models. Then, we ask four annotators who are proficient in English to evaluate the qualities of the generated sentences of different models. They are required to rank the output sentences of our model and baselines from the best to the worst in terms of irony accuracy (Irony), Sentiment preservation (Senti) and content preservation (Content). The best output is ranked with 1 and the worst output is ranked with 6. That means that the smaller our human evaluation value is, the better the corresponding model is. Results and Discussions Table TABREF35 shows the automatic evaluation results of the models in the transformation from non-ironic sentences to ironic sentences. From the results, our model obtains the best result in sentiment delta. The DualRL model achieves the highest result in other metrics, but most of its outputs are the almost same as the input sentences. So it is reasonable that DualRL system outperforms ours in these metrics but it actually does not transfer the non-ironic sentences to ironic sentences at all. From this perspective, we cannot view DualRL as an effective model for irony generation. In contrast, our model gets results close to those of DualRL and obtains a balance between irony accuracy, sentiment preservation, and content preservation if we also consider the irony accuracy discussed below. And from human evaluation results shown in Table TABREF36 , our model gets the best average rank in irony accuracy. And as mentioned above, the DualRL model usually does not change the input sentence and outputs the same sentence. Therefore, it is reasonable that it obtains the best rank in sentiment and content preservation and ours is the second. However, it still demonstrates that our model, instead of changing nothing, transfers the style of the input sentence with content and sentiment preservation at the same time. 
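For concreteness, the automatic metrics can be sketched as below. Using NLTK's corpus-level BLEU with smoothing is an assumption, since the paper does not state which BLEU implementation it relies on, and the sentences and sentiment labels are placeholders.

# Sentiment accuracy, input-vs-output BLEU, and their geometric (G2)
# and harmonic (H2) means.
from math import sqrt
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def evaluate(inputs, outputs, senti_in, senti_out):
    senti_acc = sum(a == b for a, b in zip(senti_in, senti_out)) / len(senti_in)
    bleu = corpus_bleu([[s.split()] for s in inputs],        # inputs as references
                       [s.split() for s in outputs],         # outputs as hypotheses
                       smoothing_function=SmoothingFunction().method1)
    g2 = sqrt(senti_acc * bleu)
    h2 = 2 * senti_acc * bleu / (senti_acc + bleu) if senti_acc + bleu else 0.0
    return senti_acc, bleu, g2, h2

inputs = ["i hate it when my plans get ruined"]    # placeholder example pair
outputs = ["i love it when my plans get ruined"]
print(evaluate(inputs, outputs, senti_in=["neg"], senti_out=["neg"]))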
Case Study In the section, we present some example outputs of different models. Table TABREF37 shows the results of the transformation from non-ironic sentences to ironic sentences. We can observe that: (1) The BackTrans system, the Unpaired system, the CrossAlign system and the CPTG system tends to generate sentences which are towards irony but do not preserve content. (2) The DualRL system preserves content and sentiment very well but even does not change the input sentence. (3) Our model considers both aspects and achieves a better balance among irony accuracy, sentiment and content preservation. Error Analysis Although our model outperforms other style transfer baselines according to automatic and human evaluation results, there are still some failure cases because irony generation is still a very challenging task. We would like to share the issues we meet during our experiments and our solutions to some of them in this section. No Change: As mentioned above, many style transfer models, such as DualRL, tend to make few changes to the input sentence and output the same sentence. Actually, this is a common issue for unsupervised style transfer systems and we also meet it during our experiments. The main reason for the issue is that rewards for content preservation are too prominent and rewards for style accuracy cannot work well. In contrast, in order to guarantee the readability and fluency of the output sentence, we also cannot emphasize too much on rewards for style accuracy because it may cause some other issues such as word repetition mentioned below. A method to solve the problem is tuning hyperparameters and this is also the method we implement in this work. As for content preservation, maybe MLE methods such as back-translation are not enough because they tend to force models to generate specific words. In the future, we should further design some more suitable methods to control content preservation for models without disentangling style and content representations, such as DualRL and ours. Word Repetition: During our experiments, we observe that some of the outputs prefer to repeat the same word as shown in Table TABREF38 . This is because reinforcement learning rewards encourage the model to generate words which can get high scores from classifiers and even back-translation cannot stop it. Our solution is that we can lower the probability of decoding a word in decoders if the word has been generated in the previous time steps during testing. We also try to implement this method during training time but obtain worse performances because it may limit the effects of training. Some previous studies utilize language models to control the fluency of the output sentence and we also try this method. Nonetheless, pre-training a language model with tweets and using it to generate rewards is difficult because tweets are more casual and have more noise. Rewards from that kind of language model are usually not accurate and may confuse the model. In the future, we should come up with better methods to model language fluency with the consideration of irony accuracy, sentiment and content preservation, especially for tweets. Improper Words: As ironic style is hard for our model to learn, it may generate some improper words which make the sentence strange. As the example shown in the Table TABREF38 , the sentiment word in the input sentence is “wonderful" and the model should change it into a negative word such as “sad" to make the output sentence ironic. 
However, the model changes “friday” and “fifa”, which are not related to ironic styles. We have not found an effective method to address this issue; stronger models may be needed to learn ironic styles better.

Additional Experiments

In this section, we describe additional experiments on the transformation from ironic sentences to non-ironic sentences. Since ironies are sometimes hard to understand and may cause misunderstanding, our task also explores this reverse direction. As shown in Table TABREF46 , the automatic evaluation results lead to conclusions similar to those of the transformation from non-ironic to ironic sentences. As for the human evaluation results in Table TABREF47 , our model still achieves the second-best results in sentiment and content preservation. Nevertheless, the DualRL system and ours perform poorly in irony accuracy. The reason may be that the other four baselines tend to generate common and even disfluent sentences that are irrelevant to the input sentences and are therefore hard to identify as ironies. Annotators usually mark these outputs as non-ironic, which gives those models better irony-accuracy scores than DualRL and ours but much poorer results in sentiment and content preservation. Some examples are shown in Table TABREF52 .

Conclusion and Future Work

In this paper, we first systematically define irony generation based on style transfer. Because of the lack of irony data, we make use of Twitter and build a large-scale dataset. In order to control irony accuracy, sentiment preservation and content preservation at the same time, we design a combination of rewards for reinforcement learning and incorporate reinforcement learning with a pre-training process. Experimental results demonstrate that our model outperforms other generative models and that our rewards are effective. Although our model design is effective, there are still many errors, which we systematically analyze. In the future, we are interested in exploring the directions outlined in the error analysis, and our work may extend to other kinds of ironies that are more difficult to model.
Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What is the combination of rewards for reinforcement learning? Only give me the answer and do not output any other words.
Answer:
[ "irony accuracy, sentiment preservation", " irony accuracy and sentiment preservation" ]
276
5,932
single_doc_qa
bcfe9732083d403c938510829d65d65e
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Produced by Sue Asscher The Witch of Atlas by Percy Bysshe Shelley TO MARY (ON HER OBJECTING TO THE FOLLOWING POEM, UPON THE SCORE OF ITS CONTAINING NO HUMAN INTEREST). 1. How, my dear Mary,--are you critic-bitten (For vipers kill, though dead) by some review, That you condemn these verses I have written, Because they tell no story, false or true? What, though no mice are caught by a young kitten, _5 May it not leap and play as grown cats do, Till its claws come? Prithee, for this one time, Content thee with a visionary rhyme. 2. What hand would crush the silken-winged fly, The youngest of inconstant April's minions, _10 Because it cannot climb the purest sky, Where the swan sings, amid the sun's dominions? Not thine. Thou knowest 'tis its doom to die, When Day shall hide within her twilight pinions The lucent eyes, and the eternal smile, _15 Serene as thine, which lent it life awhile. 3. To thy fair feet a winged Vision came, Whose date should have been longer than a day, And o'er thy head did beat its wings for fame, And in thy sight its fading plumes display; _20 The watery bow burned in the evening flame. But the shower fell, the swift Sun went his way-- And that is dead.--O, let me not believe That anything of mine is fit to live! 4. Wordsworth informs us he was nineteen years _25 Considering and retouching Peter Bell; Watering his laurels with the killing tears Of slow, dull care, so that their roots to Hell Might pierce, and their wide branches blot the spheres Of Heaven, with dewy leaves and flowers; this well _30 May be, for Heaven and Earth conspire to foil The over-busy gardener's blundering toil. 5. My Witch indeed is not so sweet a creature As Ruth or Lucy, whom his graceful praise Clothes for our grandsons--but she matches Peter, _35 Though he took nineteen years, and she three days In dressing. Light the vest of flowing metre She wears; he, proud as dandy with his stays, Has hung upon his wiry limbs a dress Like King Lear's 'looped and windowed raggedness.' _40 6. If you strip Peter, you will see a fellow Scorched by Hell's hyperequatorial climate Into a kind of a sulphureous yellow: A lean mark, hardly fit to fling a rhyme at; In shape a Scaramouch, in hue Othello. _45 If you unveil my Witch, no priest nor primate Can shrive you of that sin,--if sin there be In love, when it becomes idolatry. THE WITCH OF ATLAS. 1. Before those cruel Twins, whom at one birth Incestuous Change bore to her father Time, _50 Error and Truth, had hunted from the Earth All those bright natures which adorned its prime, And left us nothing to believe in, worth The pains of putting into learned rhyme, A lady-witch there lived on Atlas' mountain _55 Within a cavern, by a secret fountain. 2. Her mother was one of the Atlantides: The all-beholding Sun had ne'er beholden In his wide voyage o'er continents and seas So fair a creature, as she lay enfolden _60 In the warm shadow of her loveliness;-- He kissed her with his beams, and made all golden The chamber of gray rock in which she lay-- She, in that dream of joy, dissolved away. 3. 
'Tis said, she first was changed into a vapour, _65 And then into a cloud, such clouds as flit, Like splendour-winged moths about a taper, Round the red west when the sun dies in it: And then into a meteor, such as caper On hill-tops when the moon is in a fit: _70 Then, into one of those mysterious stars Which hide themselves between the Earth and Mars. 4. Ten times the Mother of the Months had bent Her bow beside the folding-star, and bidden With that bright sign the billows to indent _75 The sea-deserted sand--like children chidden, At her command they ever came and went-- Since in that cave a dewy splendour hidden Took shape and motion: with the living form Of this embodied Power, the cave grew warm. _80 5. A lovely lady garmented in light From her own beauty--deep her eyes, as are Two openings of unfathomable night Seen through a Temple's cloven roof--her hair Dark--the dim brain whirls dizzy with delight. _85 Picturing her form; her soft smiles shone afar, And her low voice was heard like love, and drew All living things towards this wonder new. 6. And first the spotted cameleopard came, And then the wise and fearless elephant; _90 Then the sly serpent, in the golden flame Of his own volumes intervolved;--all gaunt And sanguine beasts her gentle looks made tame. They drank before her at her sacred fount; And every beast of beating heart grew bold, _95 Such gentleness and power even to behold. 7. The brinded lioness led forth her young, That she might teach them how they should forego Their inborn thirst of death; the pard unstrung His sinews at her feet, and sought to know _100 With looks whose motions spoke without a tongue How he might be as gentle as the doe. The magic circle of her voice and eyes All savage natures did imparadise. 8. And old Silenus, shaking a green stick _105 Of lilies, and the wood-gods in a crew Came, blithe, as in the olive copses thick Cicadae are, drunk with the noonday dew: And Dryope and Faunus followed quick, Teasing the God to sing them something new; _110 Till in this cave they found the lady lone, Sitting upon a seat of emerald stone. 9. And universal Pan, 'tis said, was there, And though none saw him,--through the adamant Of the deep mountains, through the trackless air, _115 And through those living spirits, like a want, He passed out of his everlasting lair Where the quick heart of the great world doth pant, And felt that wondrous lady all alone,-- And she felt him, upon her emerald throne. _120 10. And every nymph of stream and spreading tree, And every shepherdess of Ocean's flocks, Who drives her white waves over the green sea, And Ocean with the brine on his gray locks, And quaint Priapus with his company, _125 All came, much wondering how the enwombed rocks Could have brought forth so beautiful a birth;-- Her love subdued their wonder and their mirth. 11. The herdsmen and the mountain maidens came, And the rude kings of pastoral Garamant-- _130 Their spirits shook within them, as a flame Stirred by the air under a cavern gaunt: Pigmies, and Polyphemes, by many a name, Centaurs, and Satyrs, and such shapes as haunt Wet clefts,--and lumps neither alive nor dead, _135 Dog-headed, bosom-eyed, and bird-footed. 12. For she was beautiful--her beauty made The bright world dim, and everything beside Seemed like the fleeting image of a shade: No thought of living spirit could abide, _140 Which to her looks had ever been betrayed, On any object in the world so wide, On any hope within the circling skies, But on her form, and in her inmost eyes. 13. 
Which when the lady knew, she took her spindle _145 And twined three threads of fleecy mist, and three Long lines of light, such as the dawn may kindle The clouds and waves and mountains with; and she As many star-beams, ere their lamps could dwindle In the belated moon, wound skilfully; _150 And with these threads a subtle veil she wove-- A shadow for the splendour of her love. 14. The deep recesses of her odorous dwelling Were stored with magic treasures--sounds of air, Which had the power all spirits of compelling, _155 Folded in cells of crystal silence there; Such as we hear in youth, and think the feeling Will never die--yet ere we are aware, The feeling and the sound are fled and gone, And the regret they leave remains alone. _160 15. And there lay Visions swift, and sweet, and quaint, Each in its thin sheath, like a chrysalis, Some eager to burst forth, some weak and faint With the soft burthen of intensest bliss. It was its work to bear to many a saint _165 Whose heart adores the shrine which holiest is, Even Love's:--and others white, green, gray, and black, And of all shapes--and each was at her beck. 16. And odours in a kind of aviary Of ever-blooming Eden-trees she kept, _170 Clipped in a floating net, a love-sick Fairy Had woven from dew-beams while the moon yet slept; As bats at the wired window of a dairy, They beat their vans; and each was an adept, When loosed and missioned, making wings of winds, _175 To stir sweet thoughts or sad, in destined minds. 17. And liquors clear and sweet, whose healthful might Could medicine the sick soul to happy sleep, And change eternal death into a night Of glorious dreams--or if eyes needs must weep, _180 Could make their tears all wonder and delight, She in her crystal vials did closely keep: If men could drink of those clear vials, 'tis said The living were not envied of the dead. 18. Her cave was stored with scrolls of strange device, _185 The works of some Saturnian Archimage, Which taught the expiations at whose price Men from the Gods might win that happy age Too lightly lost, redeeming native vice; And which might quench the Earth-consuming rage _190 Of gold and blood--till men should live and move Harmonious as the sacred stars above; 19. And how all things that seem untameable, Not to be checked and not to be confined, Obey the spells of Wisdom's wizard skill; _195 Time, earth, and fire--the ocean and the wind, And all their shapes--and man's imperial will; And other scrolls whose writings did unbind The inmost lore of Love--let the profane Tremble to ask what secrets they contain. _200 20. And wondrous works of substances unknown, To which the enchantment of her father's power Had changed those ragged blocks of savage stone, Were heaped in the recesses of her bower; Carved lamps and chalices, and vials which shone _205 In their own golden beams--each like a flower, Out of whose depth a fire-fly shakes his light Under a cypress in a starless night. 21. At first she lived alone in this wild home, And her own thoughts were each a minister, _210 Clothing themselves, or with the ocean foam, Or with the wind, or with the speed of fire, To work whatever purposes might come Into her mind; such power her mighty Sire Had girt them with, whether to fly or run, _215 Through all the regions which he shines upon. 22. 
The Ocean-nymphs and Hamadryades, Oreads and Naiads, with long weedy locks, Offered to do her bidding through the seas, Under the earth, and in the hollow rocks, _220 And far beneath the matted roots of trees, And in the gnarled heart of stubborn oaks, So they might live for ever in the light Of her sweet presence--each a satellite. 23. 'This may not be,' the wizard maid replied; _225 'The fountains where the Naiades bedew Their shining hair, at length are drained and dried; The solid oaks forget their strength, and strew Their latest leaf upon the mountains wide; The boundless ocean like a drop of dew _230 Will be consumed--the stubborn centre must Be scattered, like a cloud of summer dust. 24. 'And ye with them will perish, one by one;-- If I must sigh to think that this shall be, If I must weep when the surviving Sun _235 Shall smile on your decay--oh, ask not me To love you till your little race is run; I cannot die as ye must--over me Your leaves shall glance--the streams in which ye dwell Shall be my paths henceforth, and so--farewell!'-- _240 25. She spoke and wept:--the dark and azure well Sparkled beneath the shower of her bright tears, And every little circlet where they fell Flung to the cavern-roof inconstant spheres And intertangled lines of light:--a knell _245 Of sobbing voices came upon her ears From those departing Forms, o'er the serene Of the white streams and of the forest green. 26. All day the wizard lady sate aloof, Spelling out scrolls of dread antiquity, _250 Under the cavern's fountain-lighted roof; Or broidering the pictured poesy Of some high tale upon her growing woof, Which the sweet splendour of her smiles could dye In hues outshining heaven--and ever she _255 Added some grace to the wrought poesy. 27. While on her hearth lay blazing many a piece Of sandal wood, rare gums, and cinnamon; Men scarcely know how beautiful fire is-- Each flame of it is as a precious stone _260 Dissolved in ever-moving light, and this Belongs to each and all who gaze upon. The Witch beheld it not, for in her hand She held a woof that dimmed the burning brand. 28. This lady never slept, but lay in trance _265 All night within the fountain--as in sleep. Its emerald crags glowed in her beauty's glance; Through the green splendour of the water deep She saw the constellations reel and dance Like fire-flies--and withal did ever keep _270 The tenour of her contemplations calm, With open eyes, closed feet, and folded palm. 29. And when the whirlwinds and the clouds descended From the white pinnacles of that cold hill, She passed at dewfall to a space extended, _275 Where in a lawn of flowering asphodel Amid a wood of pines and cedars blended, There yawned an inextinguishable well Of crimson fire--full even to the brim, And overflowing all the margin trim. _280 30. Within the which she lay when the fierce war Of wintry winds shook that innocuous liquor In many a mimic moon and bearded star O'er woods and lawns;--the serpent heard it flicker In sleep, and dreaming still, he crept afar-- _285 And when the windless snow descended thicker Than autumn leaves, she watched it as it came Melt on the surface of the level flame. 31. She had a boat, which some say Vulcan wrought For Venus, as the chariot of her star; _290 But it was found too feeble to be fraught With all the ardours in that sphere which are, And so she sold it, and Apollo bought And gave it to this daughter: from a car Changed to the fairest and the lightest boat _295 Which ever upon mortal stream did float. 32. 
And others say, that, when but three hours old, The first-born Love out of his cradle lept, And clove dun Chaos with his wings of gold, And like a horticultural adept, _300 Stole a strange seed, and wrapped it up in mould, And sowed it in his mother's star, and kept Watering it all the summer with sweet dew, And with his wings fanning it as it grew. 33. The plant grew strong and green, the snowy flower _305 Fell, and the long and gourd-like fruit began To turn the light and dew by inward power To its own substance; woven tracery ran Of light firm texture, ribbed and branching, o'er The solid rind, like a leaf's veined fan-- _310 Of which Love scooped this boat--and with soft motion Piloted it round the circumfluous ocean. 34. This boat she moored upon her fount, and lit A living spirit within all its frame, Breathing the soul of swiftness into it. _315 Couched on the fountain like a panther tame, One of the twain at Evan's feet that sit-- Or as on Vesta's sceptre a swift flame-- Or on blind Homer's heart a winged thought,-- In joyous expectation lay the boat. _320 35. Then by strange art she kneaded fire and snow Together, tempering the repugnant mass With liquid love--all things together grow Through which the harmony of love can pass; And a fair Shape out of her hands did flow-- _325 A living Image, which did far surpass In beauty that bright shape of vital stone Which drew the heart out of Pygmalion. 36. A sexless thing it was, and in its growth It seemed to have developed no defect _330 Of either sex, yet all the grace of both,-- In gentleness and strength its limbs were decked; The bosom swelled lightly with its full youth, The countenance was such as might select Some artist that his skill should never die, _335 Imaging forth such perfect purity. 37. From its smooth shoulders hung two rapid wings, Fit to have borne it to the seventh sphere, Tipped with the speed of liquid lightenings, Dyed in the ardours of the atmosphere: _340 She led her creature to the boiling springs Where the light boat was moored, and said: 'Sit here!' And pointed to the prow, and took her seat Beside the rudder, with opposing feet. 38. And down the streams which clove those mountains vast, _345 Around their inland islets, and amid The panther-peopled forests whose shade cast Darkness and odours, and a pleasure hid In melancholy gloom, the pinnace passed; By many a star-surrounded pyramid _350 Of icy crag cleaving the purple sky, And caverns yawning round unfathomably. 39. The silver noon into that winding dell, With slanted gleam athwart the forest tops, Tempered like golden evening, feebly fell; _355 A green and glowing light, like that which drops From folded lilies in which glow-worms dwell, When Earth over her face Night's mantle wraps; Between the severed mountains lay on high, Over the stream, a narrow rift of sky. _360 40. And ever as she went, the Image lay With folded wings and unawakened eyes; And o'er its gentle countenance did play The busy dreams, as thick as summer flies, Chasing the rapid smiles that would not stay, _365 And drinking the warm tears, and the sweet sighs Inhaling, which, with busy murmur vain, They had aroused from that full heart and brain. 41. 
And ever down the prone vale, like a cloud Upon a stream of wind, the pinnace went: _370 Now lingering on the pools, in which abode The calm and darkness of the deep content In which they paused; now o'er the shallow road Of white and dancing waters, all besprent With sand and polished pebbles:--mortal boat _375 In such a shallow rapid could not float. 42. And down the earthquaking cataracts which shiver Their snow-like waters into golden air, Or under chasms unfathomable ever Sepulchre them, till in their rage they tear _380 A subterranean portal for the river, It fled--the circling sunbows did upbear Its fall down the hoar precipice of spray, Lighting it far upon its lampless way. 43. And when the wizard lady would ascend _385 The labyrinths of some many-winding vale, Which to the inmost mountain upward tend-- She called 'Hermaphroditus!'--and the pale And heavy hue which slumber could extend Over its lips and eyes, as on the gale _390 A rapid shadow from a slope of grass, Into the darkness of the stream did pass. 44. And it unfurled its heaven-coloured pinions, With stars of fire spotting the stream below; And from above into the Sun's dominions _395 Flinging a glory, like the golden glow In which Spring clothes her emerald-winged minions, All interwoven with fine feathery snow And moonlight splendour of intensest rime, With which frost paints the pines in winter time. _400 45. And then it winnowed the Elysian air Which ever hung about that lady bright, With its aethereal vans--and speeding there, Like a star up the torrent of the night, Or a swift eagle in the morning glare _405 Breasting the whirlwind with impetuous flight, The pinnace, oared by those enchanted wings, Clove the fierce streams towards their upper springs. 46. The water flashed, like sunlight by the prow Of a noon-wandering meteor flung to Heaven; _410 The still air seemed as if its waves did flow In tempest down the mountains; loosely driven The lady's radiant hair streamed to and fro: Beneath, the billows having vainly striven Indignant and impetuous, roared to feel _415 The swift and steady motion of the keel. 47. Or, when the weary moon was in the wane, Or in the noon of interlunar night, The lady-witch in visions could not chain Her spirit; but sailed forth under the light _420 Of shooting stars, and bade extend amain Its storm-outspeeding wings, the Hermaphrodite; She to the Austral waters took her way, Beyond the fabulous Thamondocana,-- 48. Where, like a meadow which no scythe has shaven, _425 Which rain could never bend, or whirl-blast shake, With the Antarctic constellations paven, Canopus and his crew, lay the Austral lake-- There she would build herself a windless haven Out of the clouds whose moving turrets make _430 The bastions of the storm, when through the sky The spirits of the tempest thundered by: 49. A haven beneath whose translucent floor The tremulous stars sparkled unfathomably, And around which the solid vapours hoar, _435 Based on the level waters, to the sky Lifted their dreadful crags, and like a shore Of wintry mountains, inaccessibly Hemmed in with rifts and precipices gray, And hanging crags, many a cove and bay. _440 50. And whilst the outer lake beneath the lash Of the wind's scourge, foamed like a wounded thing, And the incessant hail with stony clash Ploughed up the waters, and the flagging wing Of the roused cormorant in the lightning flash _445 Looked like the wreck of some wind-wandering Fragment of inky thunder-smoke--this haven Was as a gem to copy Heaven engraven,-- 51. 
On which that lady played her many pranks, Circling the image of a shooting star, _450 Even as a tiger on Hydaspes' banks Outspeeds the antelopes which speediest are, In her light boat; and many quips and cranks She played upon the water, till the car Of the late moon, like a sick matron wan, _455 To journey from the misty east began. 52. And then she called out of the hollow turrets Of those high clouds, white, golden and vermilion, The armies of her ministering spirits-- In mighty legions, million after million, _460 They came, each troop emblazoning its merits On meteor flags; and many a proud pavilion Of the intertexture of the atmosphere They pitched upon the plain of the calm mere. 53. They framed the imperial tent of their great Queen _465 Of woven exhalations, underlaid With lambent lightning-fire, as may be seen A dome of thin and open ivory inlaid With crimson silk--cressets from the serene Hung there, and on the water for her tread _470 A tapestry of fleece-like mist was strewn, Dyed in the beams of the ascending moon. 54. And on a throne o'erlaid with starlight, caught Upon those wandering isles of aery dew, Which highest shoals of mountain shipwreck not, _475 She sate, and heard all that had happened new Between the earth and moon, since they had brought The last intelligence--and now she grew Pale as that moon, lost in the watery night-- And now she wept, and now she laughed outright. _480 55. These were tame pleasures; she would often climb The steepest ladder of the crudded rack Up to some beaked cape of cloud sublime, And like Arion on the dolphin's back Ride singing through the shoreless air;--oft-time _485 Following the serpent lightning's winding track, She ran upon the platforms of the wind, And laughed to hear the fire-balls roar behind. 56. And sometimes to those streams of upper air Which whirl the earth in its diurnal round, _490 She would ascend, and win the spirits there To let her join their chorus. Mortals found That on those days the sky was calm and fair, And mystic snatches of harmonious sound Wandered upon the earth where'er she passed, _495 And happy thoughts of hope, too sweet to last. 57. But her choice sport was, in the hours of sleep, To glide adown old Nilus, where he threads Egypt and Aethiopia, from the steep Of utmost Axume, until he spreads, _500 Like a calm flock of silver-fleeced sheep, His waters on the plain: and crested heads Of cities and proud temples gleam amid, And many a vapour-belted pyramid. 58. By Moeris and the Mareotid lakes, _505 Strewn with faint blooms like bridal chamber floors, Where naked boys bridling tame water-snakes, Or charioteering ghastly alligators, Had left on the sweet waters mighty wakes Of those huge forms--within the brazen doors _510 Of the great Labyrinth slept both boy and beast, Tired with the pomp of their Osirian feast. 59. And where within the surface of the river The shadows of the massy temples lie, And never are erased--but tremble ever _515 Like things which every cloud can doom to die, Through lotus-paven canals, and wheresoever The works of man pierced that serenest sky With tombs, and towers, and fanes, 'twas her delight To wander in the shadow of the night. _520 60. With motion like the spirit of that wind Whose soft step deepens slumber, her light feet Passed through the peopled haunts of humankind. 
Scattering sweet visions from her presence sweet, Through fane, and palace-court, and labyrinth mined _525 With many a dark and subterranean street Under the Nile, through chambers high and deep She passed, observing mortals in their sleep. 61. A pleasure sweet doubtless it was to see Mortals subdued in all the shapes of sleep. _530 Here lay two sister twins in infancy; There, a lone youth who in his dreams did weep; Within, two lovers linked innocently In their loose locks which over both did creep Like ivy from one stem;--and there lay calm _535 Old age with snow-bright hair and folded palm. 62. But other troubled forms of sleep she saw, Not to be mirrored in a holy song-- Distortions foul of supernatural awe, And pale imaginings of visioned wrong; _540 And all the code of Custom's lawless law Written upon the brows of old and young: 'This,' said the wizard maiden, 'is the strife Which stirs the liquid surface of man's life.' 63. And little did the sight disturb her soul.-- _545 We, the weak mariners of that wide lake Where'er its shores extend or billows roll, Our course unpiloted and starless make O'er its wild surface to an unknown goal:-- But she in the calm depths her way could take, _550 Where in bright bowers immortal forms abide Beneath the weltering of the restless tide. 64. And she saw princes couched under the glow Of sunlike gems; and round each temple-court In dormitories ranged, row after row, _555 She saw the priests asleep--all of one sort-- For all were educated to be so.-- The peasants in their huts, and in the port The sailors she saw cradled on the waves, And the dead lulled within their dreamless graves. _560 65. And all the forms in which those spirits lay Were to her sight like the diaphanous Veils, in which those sweet ladies oft array Their delicate limbs, who would conceal from us Only their scorn of all concealment: they _565 Move in the light of their own beauty thus. But these and all now lay with sleep upon them, And little thought a Witch was looking on them. 66. She, all those human figures breathing there, Beheld as living spirits--to her eyes _570 The naked beauty of the soul lay bare, And often through a rude and worn disguise She saw the inner form most bright and fair-- And then she had a charm of strange device, Which, murmured on mute lips with tender tone, _575 Could make that spirit mingle with her own. 67. Alas! Aurora, what wouldst thou have given For such a charm when Tithon became gray? Or how much, Venus, of thy silver heaven Wouldst thou have yielded, ere Proserpina _580 Had half (oh! why not all?) the debt forgiven Which dear Adonis had been doomed to pay, To any witch who would have taught you it? The Heliad doth not know its value yet. 68. 'Tis said in after times her spirit free _585 Knew what love was, and felt itself alone-- But holy Dian could not chaster be Before she stooped to kiss Endymion, Than now this lady--like a sexless bee Tasting all blossoms, and confined to none, _590 Among those mortal forms, the wizard-maiden Passed with an eye serene and heart unladen. 69. To those she saw most beautiful, she gave Strange panacea in a crystal bowl:-- They drank in their deep sleep of that sweet wave, _595 And lived thenceforward as if some control, Mightier than life, were in them; and the grave Of such, when death oppressed the weary soul, Was as a green and overarching bower Lit by the gems of many a starry flower. _600 70. 
For on the night when they were buried, she Restored the embalmers' ruining, and shook The light out of the funeral lamps, to be A mimic day within that deathy nook; And she unwound the woven imagery _605 Of second childhood's swaddling bands, and took The coffin, its last cradle, from its niche, And threw it with contempt into a ditch. 71. And there the body lay, age after age. Mute, breathing, beating, warm, and undecaying, _610 Like one asleep in a green hermitage, With gentle smiles about its eyelids playing, And living in its dreams beyond the rage Of death or life; while they were still arraying In liveries ever new, the rapid, blind _615 And fleeting generations of mankind. 72. And she would write strange dreams upon the brain Of those who were less beautiful, and make All harsh and crooked purposes more vain Than in the desert is the serpent's wake _620 Which the sand covers--all his evil gain The miser in such dreams would rise and shake Into a beggar's lap;--the lying scribe Would his own lies betray without a bribe. 73. The priests would write an explanation full, _625 Translating hieroglyphics into Greek, How the God Apis really was a bull, And nothing more; and bid the herald stick The same against the temple doors, and pull The old cant down; they licensed all to speak _630 Whate'er they thought of hawks, and cats, and geese, By pastoral letters to each diocese. 74. The king would dress an ape up in his crown And robes, and seat him on his glorious seat, And on the right hand of the sunlike throne _635 Would place a gaudy mock-bird to repeat The chatterings of the monkey.--Every one Of the prone courtiers crawled to kiss the feet Of their great Emperor, when the morning came, And kissed--alas, how many kiss the same! _640 75. The soldiers dreamed that they were blacksmiths, and Walked out of quarters in somnambulism; Round the red anvils you might see them stand Like Cyclopses in Vulcan's sooty abysm, Beating their swords to ploughshares;--in a band _645 The gaolers sent those of the liberal schism Free through the streets of Memphis, much, I wis, To the annoyance of king Amasis. 76. And timid lovers who had been so coy, They hardly knew whether they loved or not, _650 Would rise out of their rest, and take sweet joy, To the fulfilment of their inmost thought; And when next day the maiden and the boy Met one another, both, like sinners caught, Blushed at the thing which each believed was done _655 Only in fancy--till the tenth moon shone; 77. And then the Witch would let them take no ill: Of many thousand schemes which lovers find, The Witch found one,--and so they took their fill Of happiness in marriage warm and kind. _660 Friends who, by practice of some envious skill, Were torn apart--a wide wound, mind from mind!-- She did unite again with visions clear Of deep affection and of truth sincere. 80. These were the pranks she played among the cities _665 Of mortal men, and what she did to Sprites And Gods, entangling them in her sweet ditties To do her will, and show their subtle sleights, I will declare another time; for it is A tale more fit for the weird winter nights _670 Than for these garish summer days, when we Scarcely believe much more than we can see. End of Project Gutenberg's The Witch of Atlas, by Percy Bysshe Shelley Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Who does the Witch create? Only give me the answer and do not output any other words.
Answer:
[ "Hermaphroditus." ]
276
7,970
single_doc_qa
8f89510dff6347fe8a090476aaa2bef4
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction One of the most fundamental topics in natural language processing is how best to derive high-level representations from constituent parts, as natural language meanings are a function of their constituent parts. How best to construct a sentence representation from distributed word embeddings is an example domain of this larger issue. Even though sequential neural models such as recurrent neural networks (RNN) BIBREF0 and their variants including Long Short-Term Memory (LSTM) BIBREF1 and Gated Recurrent Unit (GRU) BIBREF2 have become the de-facto standard for condensing sentence-level information from a sequence of words into a fixed vector, there have been many lines of research towards better sentence representation using other neural architectures, e.g. convolutional neural networks (CNN) BIBREF3 or self-attention based models BIBREF4 . From a linguistic point of view, the underlying tree structure—as expressed by its constituency and dependency trees—of a sentence is an integral part of its meaning. Inspired by this fact, some recursive neural network (RvNN) models are designed to reflect the syntactic tree structure, achieving impressive results on several sentence-level tasks such as sentiment analysis BIBREF5 , BIBREF6 , machine translation BIBREF7 , natural language inference BIBREF8 , and discourse relation classification BIBREF9 . However, some recent works have BIBREF10 , BIBREF11 proposed latent tree models, which learn to construct task-specific tree structures without explicit supervision, bringing into question the value of linguistically-motivated recursive neural models. Witnessing the surprising performance of the latent tree models on some sentence-level tasks, there arises a natural question: Are linguistic tree structures the optimal way of composing sentence representations for NLP tasks? In this paper, we demonstrate that linguistic priors are in fact useful for devising effective neural models for sentence representations, showing that our novel architecture based on constituency trees and their tag information obtains superior performance on several sentence-level tasks, including sentiment analysis and natural language inference. A chief novelty of our approach is that we introduce a small separate tag-level tree-LSTM to control the composition function of the existing word-level tree-LSTM, which is in charge of extracting helpful syntactic signals for meaningful semantic composition of constituents by considering both the structures and linguistic tags of constituency trees simultaneously. In addition, we demonstrate that applying a typical LSTM to preprocess the leaf nodes of a tree-LSTM greatly improves the performance of the tree models. Moreover, we propose a clustered tag set to replace the existing tags on the assumption that the original syntactic tags are too fined-grained to be useful in neural models. In short, our contributions in this work are as follows: Related Work Recursive neural networks (RvNN) are a kind of neural architecture which model sentences by exploiting syntactic structure. 
While earlier RvNN models proposed utilizing diverse composition functions, including feed-forward neural networks BIBREF12 , matrix-vector multiplication BIBREF5 , and tensor computation BIBREF6 , tree-LSTMs BIBREF13 remain the standard for several sentence-level tasks. Even though classic RvNNs have demonstrated superior performance on a variety of tasks, their inflexibility, i.e. their inability to handle dynamic compositionality for different syntactic configurations, is a considerable weakness. For instance, it would be desirable if our model could distinguish e.g. adjective-noun composition from that of verb-noun or preposition-noun composition, as models failing to make such a distinction ignore real-world syntactic considerations such as `-arity' of function words (i.e. types), and the adjunct/argument distinction. To enable dynamic compositionality in recursive neural networks, many previous works BIBREF14 , BIBREF15 , BIBREF16 , BIBREF9 , BIBREF17 , BIBREF18 , BIBREF19 have proposed various methods. One main direction of research leverages tag information, which is produced as a by-product of parsing. In detail, BIBREF16 ( BIBREF16 ) suggested TG-RNN, a model employing different composition functions according to POS tags, and TE-RNN/TE-RNTN, models which leverage tag embeddings as additional inputs for the existing tree-structured models. Despite the novelty of utilizing tag information, the explosion of the number of parameters (in case of the TG-RNN) and the limited performance of the original models (in case of the TE-RNN/TE-RNTN) have prevented these models from being widely adopted. Meanwhile, BIBREF9 ( BIBREF9 ) and BIBREF18 ( BIBREF18 ) proposed models based on a tree-LSTM which also uses the tag vectors to control the gate functions of the tree-LSTM. In spite of their impressive results, there is a limitation that the trained tag embeddings are too simple to reflect the rich information which tags provide in different syntactic structures. To alleviate this problem, we introduce structure-aware tag representations in the next section. Another way of building dynamic compositionality into RvNNs is to take advantage of a meta-network (or hyper-network). Inspired by recent works on dynamic parameter prediction, DC-TreeLSTMs BIBREF17 dynamically create the parameters for compositional functions in a tree-LSTM. Specifically, the model has two separate tree-LSTM networks whose architectures are similar, but the smaller of the two is utilized to calculate the weights of the bigger one. A possible problem for this model is that it may be easy to be trained such that the role of each tree-LSTM is ambiguous, as they share the same input, i.e. word information. Therefore, we design two disentangled tree-LSTMs in our model so that one focuses on extracting useful features from only syntactic information while the other composes semantic units with the aid of the features. Furthermore, our model reduces the complexity of computation by utilizing typical tree-LSTM frameworks instead of computing the weights for each example. Finally, some recent works BIBREF10 , BIBREF11 have proposed latent tree-structured models that learn how to formulate tree structures from only sequences of tokens, without the aid of syntactic trees or linguistic information. The latent tree models have the advantage of being able to find the optimized task-specific order of composition rather than a sequential or syntactic one. 
In experiments, we compare our model with not only syntactic tree-based models but also latent tree models, demonstrating that modeling with explicit linguistic knowledge can be an attractive option. Model In this section, we introduce a novel RvNN architecture, called SATA Tree-LSTM (Structure-Aware Tag Augmented Tree-LSTM). This model is similar to typical Tree-LSTMs, but provides dynamic compositionality by augmenting a separate tag-level tree-LSTM which produces structure-aware tag representations for each node in a tree. In other words, our model has two independent tree-structured modules based on the same constituency tree, one of which (word-level tree-LSTM) is responsible for constructing sentence representations given a sequence of words as usual, while the other (tag-level tree-LSTM) provides supplementary syntactic information to the former. In section 3.1, we first review tree-LSTM architectures. Then in section 3.2, we introduce a tag-level tree-LSTM and structure-aware tag representations. In section 3.3, we discuss an additional technique to boost the performance of tree-structured models, and in section 3.4, we describe the entire architecture of our model in detail. Tree-LSTM The LSTM BIBREF1 architecture was first introduced as an extension of the RNN architecture to mitigate the vanishing and exploding gradient problems. In addition, several works have discovered that applying the LSTM cell into tree structures can be an effective means of modeling sentence representations. To be formal, the composition function of the cell in a tree-LSTM can be formulated as follows: $$ \begin{bmatrix} \mathbf {i} \\ \mathbf {f}_l \\ \mathbf {f}_r \\ \mathbf {o} \\ \mathbf {g} \end{bmatrix} = \begin{bmatrix} \sigma \\ \sigma \\ \sigma \\ \sigma \\ \tanh \end{bmatrix} \Bigg ( \mathbf {W} \begin{bmatrix} \mathbf {h}_l \\ \mathbf {h}_r \\ \end{bmatrix} + \mathbf {b} \Bigg )$$ (Eq. 8) $$ \mathbf {c} = \mathbf {f}_l \odot \mathbf {c}_l + \mathbf {f}_r \odot \mathbf {c}_r + \mathbf {i} \odot \mathbf {g}\\$$ (Eq. 9) where $\mathbf {h}, \mathbf {c} \in \mathbb {R}^{d}$ indicate the hidden state and cell state of the LSTM cell, and $\mathbf {h}_l, \mathbf {h}_r, \mathbf {c}_l, \mathbf {c}_r \in \mathbb {R}^{d}$ the hidden states and cell states of a left and right child. $\mathbf {g} \in \mathbb {R}^{d}$ is the newly composed input for the cell and $\mathbf {i}, \mathbf {f}_{l}, \mathbf {f}_{r}, \mathbf {o} \in \mathbb {R}^{d}$ represent an input gate, two forget gates (left, right), and an output gate respectively. $\mathbf {W} \in \mathbb {R}^{5d\times 2d}$ and $\mathbf {b} \in \mathbb {R}^{5d}$ are trainable parameters. $\sigma $ corresponds to the sigmoid function, $\tanh $ to the hyperbolic tangent, and $\odot $ to element-wise multiplication. Note the equations assume that there are only two children for each node, i.e. binary or binarized trees, following the standard in the literature. While RvNN models can be constructed on any tree structure, in this work we only consider constituency trees as inputs. In spite of the obvious upside that recursive models have in being so flexible, they are known for being difficult to fully utilize with batch computations as compared to other neural architectures because of the diversity of structure found across sentences. To alleviate this problem, BIBREF8 ( BIBREF8 ) proposed the SPINN model, which brings a shift-reduce algorithm to the tree-LSTM. As SPINN simplifies the process of constructing a tree into only two operations, i.e. 
shift and reduce, it can support more effective parallel computations while enjoying the advantages of tree structures. For efficiency, our model also starts from our own SPINN re-implementation, whose function is exactly the same as that of the tree-LSTM. Structure-aware Tag Representation In most previous works using linguistic tag information BIBREF16 , BIBREF9 , BIBREF18 , tags are usually represented as simple low-dimensional dense vectors, similar to word embeddings. This approach seems reasonable in the case of POS tags that are attached to the corresponding words, but phrase-level constituent tags (e.g. NP, VP, ADJP) vary greatly in size and shape, making them less amenable to uniform treatment. For instance, even the same phrase tags within different syntactic contexts can vary greatly in size and internal structure, as the case of NP tags in Figure 1 shows. Here, the NP consisting of DT[the]-NN[stories] has a different internal structure than the NP consisting of NP[the film 's]-NNS[shortcomings]. One way of deriving structure-aware tag representations from the original tag embeddings is to introduce a separate tag-level tree-LSTM which accepts the typical tag embeddings at each node of a tree and outputs the computed structure-aware tag representations for the nodes. Note that the module concentrates on extracting useful syntactic features by considering only the tags and structures of the trees, excluding word information. Formally, we denote a tag embedding for the tag attached to each node in a tree as $\textbf {e} \in \mathbb {R}^{d_\text{T}}$ . Then, the function of each cell in the tag tree-LSTM is defined in the following way. Leaf nodes are defined by the following: $$ \begin{bmatrix} \hat{\mathbf {c}} \\ \hat{\mathbf {h}} \\ \end{bmatrix} = \tanh {\left(\mathbf {U}_\text{T} \mathbf {e} + \mathbf {a}_\text{T}\right)}$$ (Eq. 13) while non-leaf nodes are defined by the following: $$ \begin{bmatrix} \hat{\mathbf {i}} \\ \hat{\mathbf {f}}_l \\ \hat{\mathbf {f}}_r \\ \hat{\mathbf {o}} \\ \hat{\mathbf {g}} \end{bmatrix} = \begin{bmatrix} \sigma \\ \sigma \\ \sigma \\ \sigma \\ \tanh \end{bmatrix} \Bigg ( \mathbf {W_\text{T}} \begin{bmatrix} \hat{\mathbf {h}}_l \\ \hat{\mathbf {h}}_r \\ \mathbf {e} \\ \end{bmatrix} + \mathbf {b}_\text{T} \Bigg )$$ (Eq. 14) $$ \hat{\mathbf {c}} = \hat{\mathbf {f}}_l \odot \hat{\mathbf {c}}_l + \hat{\mathbf {f}}_r \odot \hat{\mathbf {c}}_r + \hat{\mathbf {i}} \odot \hat{\mathbf {g}}\\$$ (Eq. 15) where $\hat{\mathbf {h}}, \hat{\mathbf {c}} \in \mathbb {R}^{d_\text{T}}$ represent the hidden state and cell state of each node in the tag tree-LSTM. We regard the hidden state ( $\hat{\mathbf {h}}$ ) as a structure-aware tag representation for the node. $ \mathbf {U}_\text{T} \in \mathbb {R}^{2d_\text{T} \times d_\text{T}}, \textbf {a}_\text{T} \in \mathbb {R}^{2d_\text{T}}, \mathbf {W}_\text{T} \in \mathbb {R}^{5d_\text{T} \times 3d_\text{T}}$ , and $\mathbf {b}_\text{T} \in \mathbb {R}^{5d_\text{T}}$ are trainable parameters. The rest of the notation follows equations 8 , 9 , and 10 . In case of leaf nodes, the states are computed by a simple non-linear transformation. Meanwhile, the composition function in a non-leaf node absorbs the tag embedding ( $\mathbf {e}$ ) as an additional input as well as the hidden states of the two children nodes. The benefit of revising tag representations according to the internal structure is that the derived embedding is a function of the corresponding makeup of the node, rather than a monolithic, categorical tag. 
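To make the composition functions above concrete, here is a minimal PyTorch sketch of a binary tree-LSTM cell in the style of Eq. 8-10, with an optional tag input that turns it into the tag-level cell of Eq. 13-15. It assumes the standard output equation h = o ⊙ tanh(c) (the passage refers to an Eq. 10 that is not shown), and all names and sizes are illustrative rather than taken from the authors' code.

```python
import torch
import torch.nn as nn

class BinaryTreeLSTMCell(nn.Module):
    """Composes two child states (and optionally a tag embedding) into a parent state."""

    def __init__(self, d: int, tag_dim: int = 0):
        super().__init__()
        # A single affine map yields the input gate, two forget gates, output gate, and g.
        self.proj = nn.Linear(2 * d + tag_dim, 5 * d)
        self.d = d

    def forward(self, hl, cl, hr, cr, tag=None):
        x = torch.cat([hl, hr] if tag is None else [hl, hr, tag], dim=-1)
        i, fl, fr, o, g = self.proj(x).split(self.d, dim=-1)
        i, fl, fr, o = (torch.sigmoid(t) for t in (i, fl, fr, o))
        g = torch.tanh(g)
        c = fl * cl + fr * cr + i * g      # Eq. 9 / Eq. 15
        h = o * torch.tanh(c)              # assumed Eq. 10-style output
        return h, c

# Word-level cell (Eq. 8-10) and tag-level cell (Eq. 13-15) on toy inputs.
word_cell = BinaryTreeLSTMCell(d=100)
tag_cell = BinaryTreeLSTMCell(d=50, tag_dim=50)
hl, cl, hr, cr = (torch.randn(1, 100) for _ in range(4))
h, c = word_cell(hl, cl, hr, cr)
```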
With regard to the tags themselves, we conjecture that the taxonomy of the tags currently in use in many NLP systems is too complex to be utilized effectively in deep neural models, considering the specificity of many tag sets and the limited amount of data with which to train. Thus, we cluster POS (word-level) tags into 12 groups following the universal POS tagset BIBREF20 and phrase-level tags into 11 groups according to criteria analogous to the case of words, resulting in 23 tag categories in total. In this work, we use the revised coarse-grained tags instead of the original ones. For more details, we refer readers to the supplemental materials. Leaf-LSTM An inherent shortcoming of RvNNs relative to sequential models is that each intermediate representation in a tree is unaware of its external context until all the information is gathered together at the root node. In other words, each composition process is prone to be locally optimized rather than globally optimized. To mitigate this problem, we propose using a leaf-LSTM following the convention of some previous works BIBREF21 , BIBREF7 , BIBREF11 , which is a typical LSTM that accepts a sequence of words in order. Instead of leveraging word embeddings directly, we can use each hidden state and cell state of the leaf-LSTM as input tokens for leaf nodes in a tree-LSTM, anticipating the proper contextualization of the input sequence. Formally, we denote a sequence of words in an input sentence as $w_{1:n}$ ( $n$ : the length of the sentence), and the corresponding word embeddings as $\mathbf {x}_{1:n}$ . Then, the operation of the leaf-LSTM at time $t$ can be formulated as, $$ \begin{bmatrix} \tilde{\mathbf {i}} \\ \tilde{\mathbf {f}} \\ \tilde{\mathbf {o}} \\ \tilde{\mathbf {g}} \end{bmatrix} = \begin{bmatrix} \sigma \\ \sigma \\ \sigma \\ \tanh \end{bmatrix} \Bigg ( \mathbf {W}_\text{L} \begin{bmatrix} \tilde{\mathbf {h}}_{t-1} \\ \mathbf {x}_t \\ \end{bmatrix} + \mathbf {b}_\text{L} \Bigg )$$ (Eq. 18) $$ \tilde{\mathbf {c}}_t = \tilde{\mathbf {f}} \odot \tilde{\mathbf {c}}_{t-1} + \tilde{\mathbf {i}} \odot \tilde{\mathbf {g}}\\$$ (Eq. 19) where $\mathbf {x}_t \in \mathbb {R}^{d_w}$ indicates an input word vector and $\tilde{\mathbf {h}}_t$ , $\tilde{\mathbf {c}}_t \in \mathbb {R}^{d_h}$ represent the hidden and cell state of the LSTM at time $t$ ( $\tilde{\mathbf {h}}_{t-1}$ corresponds to the hidden state at time $t$ -1). $\mathbf {W}_\text{L}$ and $\mathbf {b}_\text{L} $ are learnable parameters. The remaining notation follows that of the tree-LSTM above. In experiments, we demonstrate that introducing a leaf-LSTM fares better at processing the input words of a tree-LSTM compared to using a feed-forward neural network. We also explore the possibility of its bidirectional setting in ablation study. SATA Tree-LSTM In this section, we define SATA Tree-LSTM (Structure-Aware Tag Augmented Tree-LSTM, see Figure 2 ) which joins a tag-level tree-LSTM (section 3.2), a leaf-LSTM (section 3.3), and the original word tree-LSTM together. As above we denote a sequence of words in an input sentence as $w_{1:n}$ and the corresponding word embeddings as $\mathbf {x}_{1:n}$ . In addition, a tag embedding for the tag attached to each node in a tree is denoted by $\textbf {e} \in \mathbb {R}^{d_\text{T}}$ . Then, we derive the final sentence representation for the input sentence with our model in two steps. 
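Before the two composition steps described next, the leaf-LSTM of Eq. 18-19 can be sketched as an ordinary unidirectional LSTM whose per-step hidden and cell states become the leaf inputs of the word-level tree-LSTM. The sizes below are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

word_dim, hidden_dim = 300, 300                 # illustrative sizes
leaf_cell = nn.LSTMCell(word_dim, hidden_dim)

x = torch.randn(7, word_dim)                    # embeddings for a 7-word sentence
h = torch.zeros(1, hidden_dim)
c = torch.zeros(1, hidden_dim)
leaf_states = []
for t in range(x.size(0)):
    h, c = leaf_cell(x[t].unsqueeze(0), (h, c))
    leaf_states.append((h, c))                  # (h~_t, c~_t) initialize the t-th leaf node
```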
First, we compute structure-aware tag representations ( $\hat{\mathbf {h}}$ ) for each node of a tree using the tag tree-LSTM (the right side of Figure 2 ) as follows: $$ \begin{bmatrix} \hat{\mathbf {c}} \\ \hat{\mathbf {h}} \\ \end{bmatrix} = {\left\lbrace \begin{array}{ll} \text{Tag-Tree-LSTM}(\mathbf {e}) & \text{if a leaf node} \\ \text{Tag-Tree-LSTM}(\hat{\mathbf {h}}_l, \hat{\mathbf {h}}_r, \mathbf {e}) & \text{otherwise} \end{array}\right.}$$ (Eq. 23) where Tag-Tree-LSTM indicates the module we described in section 3.2. Second, we combine semantic units recursively on the word tree-LSTM in a bottom-up fashion. For leaf nodes, we leverage the Leaf-LSTM (the bottom-left of Figure 2 , explained in section 3.3) to compute $\tilde{\mathbf {c}}_{t}$ and $\tilde{\mathbf {h}}_{t}$ in sequential order, with the corresponding input $\mathbf {x}_t$ . $$ \begin{bmatrix} \tilde{\mathbf {c}}_{t} \\ \tilde{\mathbf {h}}_{t} \\ \end{bmatrix} = \text{Leaf-LSTM}(\tilde{\textbf {h}}_{t-1}, \textbf {x}_t)$$ (Eq. 24) Then, the $\tilde{\mathbf {c}}_{t}$ and $\tilde{\mathbf {h}}_{t}$ can be utilized as input tokens to the word tree-LSTM, with the left (right) child of the target node corresponding to the $t$ th word in the input sentence. $$ \begin{bmatrix} \check{\textbf {c}}_{\lbrace l, r\rbrace } \\ \check{\textbf {h}}_{\lbrace l, r\rbrace } \end{bmatrix} = \begin{bmatrix} \tilde{\textbf {c}}_{t} \\ \tilde{\textbf {h}}_{t} \end{bmatrix}$$ (Eq. 25) In the non-leaf node case, we calculate phrase representations for each node in the word tree-LSTM (the upper-left of Figure 2 ) recursively as follows: $$ \check{\mathbf {g}} = \tanh {\left( \mathbf {U}_\text{w} \begin{bmatrix} \check{\mathbf {h}}_l \\ \check{\mathbf {h}}_r \\ \end{bmatrix} + \mathbf {a}_\text{w} \right)}$$ (Eq. 26) $$ \begin{bmatrix} \check{\mathbf {i}} \\ \check{\mathbf {f}}_l \\ \check{\mathbf {f}}_r \\ \check{\mathbf {o}} \end{bmatrix} = \begin{bmatrix} \sigma \\ \sigma \\ \sigma \\ \sigma \end{bmatrix} \Bigg ( \mathbf {W_\text{w}} \begin{bmatrix} \check{\mathbf {h}}_l \\ \check{\mathbf {h}}_r \\ \hat{\mathbf {h}} \\ \end{bmatrix} + \mathbf {b}_\text{w} \Bigg )$$ (Eq. 27) where $\check{\mathbf {h}}$ , $\check{\mathbf {c}} \in \mathbb {R}^{d_h}$ represent the hidden and cell state of each node in the word tree-LSTM. $\mathbf {U}_\text{w} \in \mathbb {R}^{d_h \times 2d_h}$ , $\mathbf {W}_\text{w} \in \mathbb {R}^{4d_h \times \left(2d_h+d_\text{T}\right)}$ , $\mathbf {a}_\text{w} \in \mathbb {R}^{d_h}$ , $\mathbf {b}_\text{w} \in \mathbb {R}^{4d_h}$ are learned parameters. The remaining notation follows those of the previous sections. Note that the structure-aware tag representations ( $\hat{\mathbf {h}}$ ) are only utilized to control the gate functions of the word tree-LSTM in the form of additional inputs, and are not involved in the semantic composition ( $\check{\mathbf {g}}$ ) directly. Finally, the hidden state of the root node ( $\check{\mathbf {h}}_\text{root}$ ) in the word-level tree-LSTM becomes the final sentence representation of the input sentence. Quantitative Analysis One of the most basic approaches to evaluate a sentence encoder is to measure the classification performance with the sentence representations made by the encoder. Thus, we conduct experiments on the following five datasets. (Summary statistics for the datasets are reported in the supplemental materials.) MR: A group of movie reviews with binary (positive / negative) classes. BIBREF22 SST-2: Stanford Sentiment Treebank BIBREF6 . 
Similar to MR, but each review is provided in the form of a binary parse tree whose nodes are annotated with numeric sentiment values. For SST-2, we only consider binary (positive / negative) classes. SST-5: Identical to SST-2, but the reviews are grouped into fine-grained (very negative, negative, neutral, positive, very positive) classes. SUBJ: Sentences grouped as being either subjective or objective (binary classes). BIBREF23 TREC: A dataset which groups questions into six different question types (classes). BIBREF24 As a preprocessing step, we construct parse trees for the sentences in the datasets using the Stanford PCFG parser BIBREF25 . Because syntactic tags are by-products of constituency parsing, we do not need further preprocessing. To classify the sentence given our sentence representation ( $\check{\mathbf {h}}_\text{root}$ ), we use one fully-connected layer with a ReLU activation, followed by a softmax classifier. The final predicted probability distribution of the class $y$ given the sentence $w_{1:n}$ is defined as follows, $$\mathbf {s} = \text{ReLU}(\mathbf {W}_\text{s} \check{\mathbf {h}}_\text{root}+ \mathbf {b}_\text{s})$$ (Eq. 37) $$p(y|w_{1:n}) = \text{softmax}(\mathbf {W}_\text{c}\mathbf {s} + \mathbf {b}_\text{c})$$ (Eq. 38) where $\textbf {s} \in \mathbb {R}^{d_\text{s}}$ is the computed task-specific sentence representation for the classifier, and $\textbf {W}_\text{s} \in \mathbb {R}^{d_\text{s} \times d_h}$ , $\textbf {W}_\text{c} \in \mathbb {R}^{d_\text{c} \times d_s}$ , $\textbf {b}_\text{s} \in \mathbb {R}^{d_s}$ , $\textbf {b}_\text{c} \in \mathbb {R}^{d_c}$ are trainable parameters. As an objective function, we use the cross entropy of the predicted and true class distributions. The results of the experiments on the five datasets are shown in table 1 . In this table, we report the test accuracy of our model and various other models on each dataset in terms of percentage. To consider the effects of random initialization, we report the best numbers obtained from each several runs with hyper-parameters fixed. Compared with the previous syntactic tree-based models as well as other neural models, our SATA Tree-LSTM shows superior or competitive performance on all tasks. Specifically, our model achieves new state-of-the-art results within the tree-structured model class on 4 out of 5 sentence classification tasks—SST-2, SST-5, MR, and TREC. The model shows its strength, in particular, when the datasets provide phrase-level supervision to facilitate tree structure learning (i.e. SST-2, SST-5). Moreover, the numbers we report for SST-5 and TREC are competitive to the existing state-of-the-art results including ones from structurally pre-trained models such as ELMo BIBREF26 , proving our model's superiority. Note that the SATA Tree-LSTM also outperforms the recent latent tree-based model, indicating that modeling a neural model with explicit linguistic knowledge can be an attractive option. On the other hand, a remaining concern is that our SATA Tree-LSTM is not robust to random seeds when the size of a dataset is relatively small, as tag embeddings are randomly initialized rather than relying on pre-trained ones in contrast with the case of words. From this observation, we could find out there needs a direction of research towards pre-trained tag embeddings. 
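For reference, the classification head of Eq. 37-38 above is just one ReLU layer over the root representation followed by a softmax over classes; a minimal sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

d_h, d_s, n_classes = 300, 300, 5               # illustrative sizes (e.g. SST-5 has 5 classes)
head = nn.Sequential(nn.Linear(d_h, d_s), nn.ReLU(), nn.Linear(d_s, n_classes))

h_root = torch.randn(8, d_h)                    # a batch of root-node sentence representations
logits = head(h_root)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, n_classes, (8,)))  # cross entropy, as in the paper
```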
To estimate the performance of our model beyond the tasks requiring only one sentence at a time, we conduct an experiment on the Stanford Natural Language Inference BIBREF34 dataset, each example of which consists of two sentences, the premise and the hypothesis. Our objective given the data is to predict the correct relationship between the two sentences among three options— contradiction, neutral, or entailment. We use the siamese architecture to encode both the premise ( $p_{1:m}$ ) and hypothesis ( $h_{1:n}$ ) following the standard of sentence-encoding models in the literature. (Specifically, $p_{1:m}$ is encoded as $\check{\mathbf {h}}_\text{root}^p \in \mathbb {R}^{d_h}$ and $h_{1:n}$ is encoded as $\check{\mathbf {h}}_\text{root}^h \in \mathbb {R}^{d_h}$ with the same encoder.) Then, we leverage some heuristics BIBREF35 , followed by one fully-connected layer with a ReLU activation and a softmax classifier. Specifically, $$\mathbf {z} = \left[ \check{\mathbf {h}}_\text{root}^p; \check{\mathbf {h}}_\text{root}^h; | \check{\mathbf {h}}_\text{root}^p - \check{\mathbf {h}}_\text{root}^h |; \check{\mathbf {h}}_\text{root}^p \odot \check{\mathbf {h}}_\text{root}^h \right]$$ (Eq. 41) $$\mathbf {s} = \text{ReLU}(\mathbf {W}_\text{s} \mathbf {z} + \mathbf {b}_\text{s})$$ (Eq. 42) where $\textbf {z} \in \mathbb {R}^{4d_h}$ , $\textbf {s} \in \mathbb {R}^{d_s}$ are intermediate features for the classifier and $\textbf {W}_\text{s} \in \mathbb {R}^{d_\text{s} \times 4d_h}$ , $\textbf {W}_\text{c} \in \mathbb {R}^{d_\text{c} \times d_s}$ , $\textbf {b}_\text{s} \in \mathbb {R}^{d_s}$ , $\textbf {b}_\text{c} \in \mathbb {R}^{d_c}$ are again trainable parameters. Our experimental results on the SNLI dataset are shown in table 2 . In this table, we report the test accuracy and number of trainable parameters for each model. Our SATA-LSTM again demonstrates its decent performance compared against the neural models built on both syntactic trees and latent trees, as well as the non-tree models. (Latent Syntax Tree-LSTM: BIBREF10 ( BIBREF10 ), Tree-based CNN: BIBREF35 ( BIBREF35 ), Gumbel Tree-LSTM: BIBREF11 ( BIBREF11 ), NSE: BIBREF36 ( BIBREF36 ), Reinforced Self-Attention Network: BIBREF4 ( BIBREF4 ), Residual stacked encoders: BIBREF37 ( BIBREF37 ), BiLSTM with generalized pooling: BIBREF38 ( BIBREF38 ).) Note that the number of learned parameters in our model is also comparable to other sophisticated models, showing the efficiency of our model. Even though our model has proven its mettle, the effect of tag information seems relatively weak in the case of SNLI, which contains a large amount of data compared to the others. One possible explanation is that neural models may learn some syntactic rules from large amounts of text when the text size is large enough, reducing the necessity of external linguistic knowledge. We leave the exploration of the effectiveness of tags relative to data size for future work. Here we go over the settings common across our models during experimentation. For more task-specific details, refer to the supplemental materials. For our input embeddings, we used 300 dimensional 840B GloVe BIBREF39 as pre-trained word embeddings, and tag representations were randomly sampled from the uniform distribution [-0.005, 0.005]. Tag vectors are revised during training while the fine-tuning of the word embedding depends on the task. Our models were trained using the Adam BIBREF40 or Adadelta BIBREF41 optimizer, depending on task. 
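The SNLI matching layer of Eq. 41-42 above combines the two siamese encodings with the usual concatenation, absolute difference, and element-wise product features before the classifier. A minimal sketch, with illustrative sizes:

```python
import torch
import torch.nn as nn

d_h, d_s, n_labels = 300, 300, 3                # 3 NLI labels; sizes are illustrative
hp = torch.randn(8, d_h)                        # encoded premise,    h_root^p
hh = torch.randn(8, d_h)                        # encoded hypothesis, h_root^h

z = torch.cat([hp, hh, (hp - hh).abs(), hp * hh], dim=-1)                 # Eq. 41
head = nn.Sequential(nn.Linear(4 * d_h, d_s), nn.ReLU(), nn.Linear(d_s, n_labels))
logits = head(z)                                 # Eq. 42 followed by the softmax classifier
```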
For regularization, weight decay is added to the loss function except for SNLI following BIBREF42 ( BIBREF42 ) and Dropout BIBREF43 is also applied for the word embeddings and task-specific classifiers. Moreover, batch normalization BIBREF44 is adopted for the classifiers. As a default, all the weights in the model are initialized following BIBREF45 ( BIBREF45 ) and the biases are set to 0. The total norm of the gradients of the parameters is clipped not to be over 5 during training. Our best models for each dataset were chosen by validation accuracy in cases where a validation set was provided as a part of the dataset. Otherwise, we perform a grid search on probable hyper-parameter settings, or run 10-fold cross-validation in cases where even a test set does not exist. Ablation Study In this section, we design an ablation study on the core modules of our model to explore their effectiveness. The dataset used in this experiment is SST-2. To conduct the experiment, we only replace the target module with other candidates while maintaining the other settings. To be specific, we focus on two modules, the leaf-LSTM and structure-aware tag embeddings (tag-level tree-LSTM). In the first case, the leaf-LSTM is replaced with a fully-connected layer with a $\tanh $ activation or Bi-LSTM. In the second case, we replace the structure-aware tag embeddings with naive tag embeddings or do not employ them at all. The experimental results are depicted in Figure 3 . As the chart shows, our model outperforms all the other options we have considered. In detail, the left part of the chart shows that the leaf-LSTM is the most effective option compared to its competitors. Note that the sequential leaf-LSTM is somewhat superior or competitive than the bidirectional leaf-LSTM when both have a comparable number of parameters. We conjecture this may because a backward LSTM does not add additional useful knowledge when the structure of a sentence is already known. In conclusion, we use the uni-directional LSTM as a leaf module because of its simplicity and remarkable performance. Meanwhile, the right part of the figure demonstrates that our newly introduced structure-aware embeddings have a real impact on improving the model performance. Interestingly, employing the naive tag embeddings made no difference in terms of the test accuracy, even though the absolute validation accuracy increased (not reported in the figure). This result supports our assumption that tag information should be considered in the structure. Qualitative Analysis In previous sections, we have numerically demonstrated that our model is effective in encouraging useful composition of semantic units. Here, we directly investigate the computed representations for each node of a tree, showing that the remarkable performance of our model is mainly due to the gradual and recursive composition of the intermediate representations on the syntactic structure. To observe the phrase-level embeddings at a glance, we draw a scatter plot in which a point represents the corresponding intermediate representation. We utilize PCA (Principal Component Analysis) to project the representations into a two-dimensional vector space. As a target parse tree, we reuse the one seen in Figure 1 . The result is shown in Figure 4 . From this figure, we confirm that the intermediate representations have a hierarchy in the semantic space, which is very similar to that of the parse tree. 
In other words, as many tree-structured models pursue, we can see the tendency of constructing the representations from the low-level (the bottom of the figure) to the high-level (the top-left and top-right of the figure), integrating the meaning of the constituents recursively. An interesting thing to note is that the final sentence representation is near that of the phrase `, the stories are quietly moving.' rather than that of `Despite the film's shortcomings', catching the main meaning of the sentence. Conclusion We have proposed a novel RvNN architecture to fully utilize linguistic priors. A newly introduced tag-level tree-LSTM demonstrates that it can effectively control the composition function of the corresponding word-level tree-LSTM. In addition, the proper contextualization of the input word vectors results in significant performance improvements on several sentence-level tasks. For future work, we plan to explore a new way of exploiting dependency trees effectively, similar to the case of constituency trees. Acknowledgments We thank anonymous reviewers for their constructive and fruitful comments. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF2016M3C4A7952587). Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Which baselines did they compare against? Only give me the answer and do not output any other words.
Answer:
[ "Various tree structured neural networks including variants of Tree-LSTM, Tree-based CNN, RNTN, and non-tree models including variants of LSTMs, CNNs, residual, and self-attention based networks", "Sentence classification baselines: RNTN (Socher et al. 2013), AdaMC-RNTN (Dong et al. 2014), TE-RNTN (Qian et al. 2015), TBCNN (Mou et al. 2015), Tree-LSTM (Tai, Socher, and Manning 2015), AdaHT-LSTM-CM (Liu, Qiu, and Huang 2017), DC-TreeLSTM (Liu, Qiu, and Huang 2017), TE-LSTM (Huang, Qian, and Zhu 2017), BiConTree (Teng and Zhang 2017), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), TreeNet (Cheng et al. 2018), CNN (Kim 2014), AdaSent (Zhao, Lu, and Poupart 2015), LSTM-CNN (Zhou et al. 2016), byte-mLSTM (Radford, Jozefowicz, and Sutskever 2017), BCN + Char + CoVe (McCann et al. 2017), BCN + Char + ELMo (Peters et al. 2018). \nStanford Natural Language Inference baselines: Latent Syntax Tree-LSTM (Yogatama et al. 2017), Tree-based CNN (Mou et al. 2016), Gumbel Tree-LSTM (Choi, Yoo, and Lee 2018), NSE (Munkhdalai and Yu 2017), Reinforced Self- Attention Network (Shen et al. 2018), Residual stacked encoders: (Nie and Bansal 2017), BiLSTM with generalized pooling (Chen, Ling, and Zhu 2018)." ]
276
8,032
single_doc_qa
0a646e21cc7e4537964a57e6bb2dc65f
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Grammar induction is the task of inducing hierarchical syntactic structure from data. Statistical approaches to grammar induction require specifying a probabilistic grammar (e.g. formalism, number and shape of rules), and fitting its parameters through optimization. Early work found that it was difficult to induce probabilistic context-free grammars (PCFG) from natural language data through direct methods, such as optimizing the log likelihood with the EM algorithm BIBREF0 , BIBREF1 . While the reasons for the failure are manifold and not completely understood, two major potential causes are the ill-behaved optimization landscape and the overly strict independence assumptions of PCFGs. More successful approaches to grammar induction have thus resorted to carefully-crafted auxiliary objectives BIBREF2 , priors or non-parametric models BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , and manually-engineered features BIBREF7 , BIBREF8 to encourage the desired structures to emerge. We revisit these aforementioned issues in light of advances in model parameterization and inference. First, contrary to common wisdom, we find that parameterizing a PCFG's rule probabilities with neural networks over distributed representations makes it possible to induce linguistically meaningful grammars by simply optimizing log likelihood. While the optimization problem remains non-convex, recent work suggests that there are optimization benefits afforded by over-parameterized models BIBREF9 , BIBREF10 , BIBREF11 , and we indeed find that this neural PCFG is significantly easier to optimize than the traditional PCFG. Second, this factored parameterization makes it straightforward to incorporate side information into rule probabilities through a sentence-level continuous latent vector, which effectively allows different contexts in a derivation to coordinate. In this compound PCFG—continuous mixture of PCFGs—the context-free assumptions hold conditioned on the latent vector but not unconditionally, thereby obtaining longer-range dependencies within a tree-based generative process. To utilize this approach, we need to efficiently optimize the log marginal likelihood of observed sentences. While compound PCFGs break efficient inference, if the latent vector is known the distribution over trees reduces to a standard PCFG. This property allows us to perform grammar induction using a collapsed approach where the latent trees are marginalized out exactly with dynamic programming. To handle the latent vector, we employ standard amortized inference using reparameterized samples from a variational posterior approximated from an inference network BIBREF12 , BIBREF13 . On standard benchmarks for English and Chinese, the proposed approach is found to perform favorably against recent neural network-based approaches to grammar induction BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 . 
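The collapsed amortized inference sketched in the last paragraph can be written as a single-sample ELBO: trees are marginalized exactly with the inside algorithm (possible because the model is an ordinary PCFG once z is fixed), while z is handled with a reparameterized Gaussian posterior. The function below is only a shape of the computation; inference_net, rule_net, and inside_log_likelihood are assumed components, not names from the released code.

```python
import torch

def compound_pcfg_elbo(x_tokens, inference_net, rule_net, inside_log_likelihood):
    """Single-sample evidence lower bound for one sentence x."""
    mu, logvar = inference_net(x_tokens)                    # parameters of q(z | x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # reparameterized sample
    rule_log_probs = rule_net(z)                            # sentence-specific rule probabilities pi_z
    log_px_given_z = inside_log_likelihood(x_tokens, rule_log_probs)  # exact marginal over trees
    # Closed-form KL between q(z|x) = N(mu, diag(sigma^2)) and the N(0, I) prior.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
    return log_px_given_z - kl
```

Dropping z (and the KL term) reduces this objective to direct log-likelihood training of the neural PCFG.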
Probabilistic Context-Free Grammars We consider context-free grammars (CFG) consisting of a 5-tuple INLINEFORM0 where INLINEFORM1 is the distinguished start symbol, INLINEFORM2 is a finite set of nonterminals, INLINEFORM3 is a finite set of preterminals, INLINEFORM6 is a finite set of terminal symbols, and INLINEFORM7 is a finite set of rules of the form, INLINEFORM0 A probabilistic context-free grammar (PCFG) consists of a grammar INLINEFORM0 and rule probabilities INLINEFORM1 such that INLINEFORM2 is the probability of the rule INLINEFORM3 . Letting INLINEFORM4 be the set of all parse trees of INLINEFORM5 , a PCFG defines a probability distribution over INLINEFORM6 via INLINEFORM7 where INLINEFORM8 is the set of rules used in the derivation of INLINEFORM9 . It also defines a distribution over string of terminals INLINEFORM10 via INLINEFORM0 where INLINEFORM0 , i.e. the set of trees INLINEFORM1 such that INLINEFORM2 's leaves are INLINEFORM3 . We will slightly abuse notation and use INLINEFORM0 to denote the posterior distribution over the unobserved latent trees given the observed sentence INLINEFORM0 , where INLINEFORM1 is the indicator function. Compound PCFGs A compound probability distribution BIBREF19 is a distribution whose parameters are themselves random variables. These distributions generalize mixture models to the continuous case, for example in factor analysis which assumes the following generative process, INLINEFORM0 Compound distributions provide the ability to model rich generative processes, but marginalizing over the latent parameter can be computationally intractable unless conjugacy can be exploited. In this work, we study compound probabilistic context-free grammars whose distribution over trees arises from the following generative process: we first obtain rule probabilities via INLINEFORM0 where INLINEFORM0 is a prior with parameters INLINEFORM1 (spherical Gaussian in this paper), and INLINEFORM2 is a neural network that concatenates the input symbol embeddings with INLINEFORM3 and outputs the sentence-level rule probabilities INLINEFORM4 , INLINEFORM0 where INLINEFORM0 denotes vector concatenation. Then a tree/sentence is sampled from a PCFG with rule probabilities given by INLINEFORM1 , INLINEFORM0 This can be viewed as a continuous mixture of PCFGs, or alternatively, a Bayesian PCFG with a prior on sentence-level rule probabilities parameterized by INLINEFORM0 . Importantly, under this generative model the context-free assumptions hold conditioned on INLINEFORM3 , but they do not hold unconditionally. This is shown in Figure FIGREF3 (right) where there is a dependence path through INLINEFORM4 if it is not conditioned upon. Compound PCFGs give rise to a marginal distribution over parse trees INLINEFORM5 via INLINEFORM0 where INLINEFORM0 . The subscript in INLINEFORM1 denotes the fact that the rule probabilities depend on INLINEFORM2 . Compound PCFGs are clearly more expressive than PCFGs as each sentence has its own set of rule probabilities. However, it still assumes a tree-based generative process, making it possible to learn latent tree structures. Our motivation for the compound PCFG is based on the observation that for grammar induction, context-free assumptions are generally made not because they represent an adequate model of natural language, but because they allow for tractable training. We can in principle model richer dependencies through vertical/horizontal Markovization BIBREF21 , BIBREF22 and lexicalization BIBREF23 . 
However such dependencies complicate training due to the rapid increase in the number of rules. Under this view, we can interpret the compound PCFG as a restricted version of some lexicalized, higher-order PCFG where a child can depend on structural and lexical context through a shared latent vector. We hypothesize that this dependence among siblings is especially useful in grammar induction from words, where (for example) if we know that watched is used as a verb then the noun phrase is likely to be a movie. In contrast to the usual Bayesian treatment of PCFGs which places priors on global rule probabilities BIBREF3 , BIBREF4 , BIBREF6 , the compound PCFG assumes a prior on local, sentence-level rule probabilities. It is therefore closely related to the Bayesian grammars studied by BIBREF25 and BIBREF26 , who also sample local rule probabilities from a logistic normal prior for training dependency models with valence (DMV) BIBREF27 . Experimental Setup Results and Discussion Table TABREF23 shows the unlabeled INLINEFORM0 scores for our models and various baselines. All models soundly outperform right branching baselines, and we find that the neural PCFG/compound PCFG are strong models for grammar induction. In particular the compound PCFG outperforms other models by an appreciable margin on both English and Chinese. We again note that we were unable to induce meaningful grammars through a traditional PCFG with the scalar parameterization despite a thorough hyperparameter search. See lab:full for the full results (including corpus-level INLINEFORM1 ) broken down by sentence length. Table TABREF27 analyzes the learned tree structures. We compare similarity as measured by INLINEFORM0 against gold, left, right, and “self" trees (top), where self INLINEFORM1 score is calculated by averaging over all 6 pairs obtained from 4 different runs. We find that PRPN is particularly consistent across multiple runs. We also observe that different models are better at identifying different constituent labels, as measured by label recall (Table TABREF27 , bottom). While left as future work, this naturally suggests an ensemble approach wherein the empirical probabilities of constituents (obtained by averaging the predicted binary constituent labels from the different models) are used either to supervise another model or directly as potentials in a CRF constituency parser. Finally, all models seemed to have some difficulty in identifying SBAR/VP constituents which typically span more words than NP constituents. Related Work Grammar induction has a long and rich history in natural language processing. Early work on grammar induction with pure unsupervised learning was mostly negative BIBREF0 , BIBREF1 , BIBREF74 , though BIBREF75 reported some success on partially bracketed data. BIBREF76 and BIBREF2 were some of the first successful statistical approaches to grammar induction. In particular, the constituent-context model (CCM) of BIBREF2 , which explicitly models both constituents and distituents, was the basis for much subsequent work BIBREF27 , BIBREF7 , BIBREF8 . Other works have explored imposing inductive biases through Bayesian priors BIBREF4 , BIBREF5 , BIBREF6 , modified objectives BIBREF42 , and additional constraints on recursion depth BIBREF77 , BIBREF48 . While the framework of specifying the structure of a grammar and learning the parameters is common, other methods exist. 
BIBREF43 consider a nonparametric-style approach to unsupervised parsing by using random subsets of training subtrees to parse new sentences. BIBREF46 utilize an incremental algorithm for unsupervised parsing which makes local decisions to create constituents based on a complex set of heuristics. BIBREF47 induce parse trees through cascaded applications of finite state models. More recently, neural network-based approaches to grammar induction have shown promising results on inducing parse trees directly from words. BIBREF14 , BIBREF15 learn tree structures through soft gating layers within neural language models, while BIBREF16 combine recursive autoencoders with the inside-outside algorithm. BIBREF17 train unsupervised recurrent neural network grammars with a structured inference network to induce latent trees, and BIBREF78 utilize image captions to identify and ground constituents. Our work is also related to latent variable PCFGs BIBREF79 , BIBREF80 , BIBREF81 , which extend PCFGs to the latent variable setting by splitting nonterminal symbols into latent subsymbols. In particular, latent vector grammars BIBREF82 and compositional vector grammars BIBREF83 also employ continuous vectors within their grammars. However, these approaches have been employed for learning supervised parsers on annotated treebanks, in contrast to the unsupervised setting of the current work. Conclusion This work explores grammar induction with compound PCFGs, which modulate rule probabilities with per-sentence continuous latent vectors. The latent vector induces marginal dependencies beyond the traditional first-order context-free assumptions within a tree-based generative process, leading to improved performance. The collapsed amortized variational inference approach is general and can be used for generative models which admit tractable inference through partial conditioning. Learning deep generative models which exhibit such conditional Markov properties is an interesting direction for future work. Acknowledgments We thank Phil Blunsom for initial discussions which seeded many of the core ideas in the present work. We also thank Yonatan Belinkov and Shay Cohen for helpful feedback, and Andrew Drozdov for providing the parsed dataset from their DIORA model. YK is supported by a Google Fellowship. AMR acknowledges the support of NSF 1704834, 1845664, AWS, and Oracle. Model Parameterization We associate an input embedding $w_N$ for each symbol $N$ on the left side of a rule (i.e. $N \in \{S\} \cup \mathcal{N} \cup \mathcal{P}$) and run a neural network over $w_N$ to obtain the rule probabilities. Concretely, each rule type is parameterized as follows: $\pi_{S \rightarrow A} = \frac{\exp(u_A^\top f_1(w_S))}{\sum_{A' \in \mathcal{N}} \exp(u_{A'}^\top f_1(w_S))}$, $\pi_{A \rightarrow BC} = \frac{\exp(u_{BC}^\top w_A)}{\sum_{B'C' \in \mathcal{M}} \exp(u_{B'C'}^\top w_A)}$, $\pi_{T \rightarrow w} = \frac{\exp(u_w^\top f_2(w_T))}{\sum_{w' \in \Sigma} \exp(u_{w'}^\top f_2(w_T))}$, where $\mathcal{M}$ is the product space $(\mathcal{N} \cup \mathcal{P}) \times (\mathcal{N} \cup \mathcal{P})$, and $f_1, f_2$ are MLPs with two residual layers, $f_i(x) = g_{i,1}(g_{i,2}(W_i x))$ with each residual layer of the form $g_{i,j}(y) = \mathrm{ReLU}(V_{i,j}\,\mathrm{ReLU}(U_{i,j}\,y)) + y$. The bias terms for the above expressions (including for the rule probabilities) are omitted for notational brevity. In Figure FIGREF3 we use $\pi_S = \{\pi_r : r \in \mathcal{R}_S\}$, $\pi_{\mathcal{N}} = \{\pi_r : r \in \mathcal{R}_A, A \in \mathcal{N}\}$, and $\pi_{\mathcal{P}} = \{\pi_r : r \in \mathcal{R}_T, T \in \mathcal{P}\}$ to refer to rule probabilities of different rule types, where $\mathcal{R}_N$ denotes the set of rules with $N$ on the left hand side. The compound PCFG rule probabilities $\pi_{z,r}$ given a latent vector $z$ are obtained analogously by concatenating $z$ with the symbol embeddings: $\pi_{z, S \rightarrow A} \propto \exp(u_A^\top f_1([w_S; z]))$, $\pi_{z, A \rightarrow BC} \propto \exp(u_{BC}^\top [w_A; z])$, $\pi_{z, T \rightarrow w} \propto \exp(u_w^\top f_2([w_T; z]))$. Again the bias terms are omitted for brevity, and $f_1, f_2$ are as before where the first layer's input dimensions are appropriately changed to account for concatenation with $z$.
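The residual MLPs above can be sketched in PyTorch as follows; the class names, dimensions, and output size are invented for this illustration and are not taken from the released implementation.

```python
# Illustrative sketch of an MLP with two residual layers mapping a symbol
# embedding (optionally concatenated with z) to unnormalized rule scores.
# Sizes and names are assumptions for this example, not the paper's code.
import torch
import torch.nn as nn

class ResidualLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin1 = nn.Linear(dim, dim)
        self.lin2 = nn.Linear(dim, dim)

    def forward(self, y):
        # two ReLU-activated linear maps with a skip connection
        return torch.relu(self.lin2(torch.relu(self.lin1(y)))) + y

class RuleScorer(nn.Module):
    """Maps a (possibly z-augmented) symbol embedding to rule scores."""
    def __init__(self, in_dim, hidden_dim, num_outputs):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden_dim)  # handles concatenation with z
        self.res1 = ResidualLayer(hidden_dim)
        self.res2 = ResidualLayer(hidden_dim)
        self.out = nn.Linear(hidden_dim, num_outputs)  # one score per candidate rule

    def forward(self, x):
        h = self.res1(self.res2(self.proj(x)))
        return self.out(h)  # softmax over these scores gives pi_{S->A}, etc.

scorer = RuleScorer(in_dim=64 + 32, hidden_dim=128, num_outputs=30)
x = torch.randn(64 + 32)            # e.g. [w_S ; z]
probs = torch.softmax(scorer(x), -1)  # rule probabilities for one left-hand side
```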
Corpus/Sentence $F_1$ by Sentence Length For completeness we show the corpus-level and sentence-level $F_1$ broken down by sentence length in Table TABREF44 , averaged across 4 different runs of each model. Experiments with RNNGs For experiments on supervising RNNGs with induced trees, we use the parameterization and hyperparameters from BIBREF17 , which uses a 2-layer 650-dimensional stack LSTM (with dropout of 0.5) and a 650-dimensional tree LSTM BIBREF88 , BIBREF90 as the composition function. Concretely, the generative story is as follows: first, the stack representation is used to predict the next action (shift or reduce) via an affine transformation followed by a sigmoid. If shift is chosen, we obtain a distribution over the vocabulary via another affine transformation over the stack representation followed by a softmax. Then we sample the next word from this distribution and shift the generated word onto the stack using the stack LSTM. If reduce is chosen, we pop the last two elements off the stack and use the tree LSTM to obtain a new representation. This new representation is shifted onto the stack via the stack LSTM. Note that this RNNG parameterization is slightly different from the original one of BIBREF53 , which does not ignore constituent labels and utilizes a bidirectional LSTM as the composition function instead of a tree LSTM. As our RNNG parameterization only works with binary trees, we binarize the gold trees with right binarization for the RNNG trained on gold trees (trees from the unsupervised methods explored in this paper are already binary). The RNNG also trains a discriminative parser alongside the generative model for evaluation with importance sampling. We use a CRF parser whose span score parameterization is similar to recent works BIBREF89 , BIBREF87 , BIBREF85 : position embeddings are added to word embeddings, and a bidirectional LSTM with 256 hidden dimensions is run over the input representations to obtain the forward hidden states $\overrightarrow{h}_1, \ldots, \overrightarrow{h}_T$ and backward hidden states $\overleftarrow{h}_1, \ldots, \overleftarrow{h}_T$. The score $s_{ij}$ for a constituent spanning the $i$-th to $j$-th words is given by $s_{ij} = \mathrm{MLP}([\overrightarrow{h}_{j+1} - \overrightarrow{h}_i; \overleftarrow{h}_{i-1} - \overleftarrow{h}_j])$, where the MLP has a single hidden layer with ReLU nonlinearity followed by layer normalization BIBREF84 . For experiments on fine-tuning the RNNG with the unsupervised RNNG, we take the discriminative parser (which is also pretrained alongside the RNNG on induced trees) to be the structured inference network for optimizing the evidence lower bound. We refer the reader to BIBREF17 and their open source implementation for additional details. We also observe that, as noted by BIBREF17 , a URNNG trained from scratch on this version of PTB without punctuation failed to outperform a right-branching baseline. The LSTM language model baseline is the same size as the stack LSTM (i.e. 2 layers, 650 hidden units, dropout of 0.5), and is therefore equivalent to an RNNG with completely right-branching trees. The PRPN/ON baselines for perplexity/syntactic evaluation in Table TABREF30 also have 2 layers with 650 hidden units and 0.5 dropout. Therefore all models considered in Table TABREF30 have roughly the same capacity. For all models we share input/output word embeddings BIBREF86 . Perplexity estimation for the RNNGs and the compound PCFG uses 1000 importance-weighted samples. For grammaticality judgment, we modify the publicly available dataset from BIBREF56 to only keep sentence pairs that did not have any unknown words with respect to our PTB vocabulary of 10K words.
This results in 33K sentence pairs for evaluation. Nonterminal/Preterminal Alignments Figure FIGREF50 shows the part-of-speech alignments and Table TABREF46 shows the nonterminal label alignments for the compound PCFG/neural PCFG. Subtree Analysis Table TABREF53 lists more examples of constituents within each subtree as the top principical component is varied. Due to data sparsity, the subtree analysis is performed on the full dataset. See section UID36 for more details. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: which chinese datasets were used? Only give me the answer and do not output any other words.
Answer:
[ "Answer with content missing: (Data section) Chinese with version 5.1 of the Chinese Penn Treebank (CTB)" ]
276
3,513
single_doc_qa
81ed4d26644f40a690ac16706218cda6
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Weep Not, Child is a 1964 novel by Kenyan author Ngũgĩ wa Thiong'o. It was his first novel, published in 1964 under the name James Ngugi. It was among the African Writers Series. It was the first English language|English novel to be published by an East African. Thiong'o's works deal with the relationship between Africans and white settlers in colonial Kenya, and are heavily critical of colonial rule. Specifically, Weep Not, Child deals with the Mau Mau Uprising, and "the bewildering dispossession of an entire people from their ancestral land." Ngũgĩ wrote the novel while he was a student at Makerere University. The book is divided into two parts and eighteen chapters. Part one deals mostly with the education of Njoroge, while part two deals with the rising Mau Mau movement. Plot summary Njoroge, a little boy, is urged to attend school by his mother. He is the first one of his family able to go to school. His family lives on the land of Jacobo, an African made rich by his dealings with white settlers, namely Mr. Howlands, the most powerful land owner in the area. Njoroge's brother Kamau works as an apprentice to a carpenter, while Boro, the eldest living son, is troubled by his experiences while in forced service during World War II, including witnessing the death of his elder brother. Ngotho, Njoroge's father and a respected man in the surrounding area, tends Mr. Howlands' crops, but is motivated by his passion to preserve his ancestral land, rather than for any compensation or loyalty. One day, black workers call for a strike to obtain higher wages. Ngotho is ambivalent about participating in the strike because he fears he will lose his job. However, he decides to go to the gathering, even though his two wives do not agree. At the demonstration, there are calls for higher wages. Suddenly, the white police inspector brings Jacobo to the gathering to pacify the native people. Jacobo tries to put an end to the strike. Ngotho attacks Jacobo, and the result is a riot where two people are killed. Jacobo survives and swears revenge. Ngotho loses his job and Njoroge’s family is forced to move. Njoroge’s brothers fund his education and seem to lose respect for their father. Mwihaki, Jacobo's daughter and Njoroge's best friend, enters a girls' only boarding school, leaving Njoroge relatively alone. He reflects upon her leaving, and realizes that he was embarrassed by his father's actions towards Jacobo. For this reason, Njoroge is not upset by her exit and their separation. Njoroge switches to another school. For a time, everyone's attention is focused on the upcoming trial of Jomo Kenyatta – a revered leader of the movement. Many blacks think that he is going to bring forth Kenya’s independence. But Jomo loses the trial and is imprisoned. This results in further protests and greater suppression of the black population. Jacobo and a white landowner, Mr. Howlands, fight against the rising activities of the Mau Mau, an organization striving for Kenyan economic, political, and cultural independence. Jacobo accuses Ngotho of being the leader of the Mau Mau and tries to imprison the whole family. Meanwhile, the situation in the country is deteriorating. Six black men are taken out of their houses and executed in the woods. One day Njoroge meets Mwihaki again, who has returned from boarding school. 
Although Njoroge had planned to avoid her due to the conflict between their fathers, their friendship is unaffected. Njoroge passes an important exam that allows him to advance to High School. His village is proud of him, and collects money to pay Njoroge's High School tuition. Several months later, Jacobo is murdered in his office by a member of the Mau Mau. Mr. Howlands has Njoroge removed from school for questioning. Both father and son are brutally beaten before release and Ngotho is left barely alive. Although there doesn't seem to be a connection between Njoroge's family and the murder, it is eventually revealed that Njoroge's brothers are behind the assassination, and that Boro is the real leader of the Mau Mau. Ngotho soon dies from his injuries and Njoroge finds out that his father was protecting his brothers. Kamau has been imprisoned for life. Only Njoroge and his two mothers remain free, and Njoroge is left as the sole provider of his two mothers. Njoroge fears that he cannot make ends meet; he gives up hope of continuing in school and loses faith in God. Njoroge asks Mwihaki's for support, but she is angry because of her father’s death. When he finally pledges his love to her, she refuses to leave with him, realizing her obligation to Kenya and her mother. Njoroge decides to leave town and makes an attempt at suicide; however, he fails when his mothers find him before he is able to hang himself. The novel closes with Njoroge feeling hopeless, and ashamed of cowardice. Characters in Weep Not, Child Njoroge: the main character of the book whose main goal throughout the book is to become as educated as possible. Ngotho: Njoroge's father. He works for Mr.Howlands and is respected by him until he attacks Jacobo at a workers strike. He is fired and the family is forced to move to another section of the country. Over the course of the book his position as the central power of the family weakened, to the point where his self-realization that he has spent his whole life waiting for the prophecy (that proclaims the blacks will be returned their land) to come true rather than fighting for Kenyan independence, leads to his depression. Nyokabi and Njeri: the two wives of Ngotho. Njeri is Ngotho's first wife, and mother of Boro, Kamau, and Kori. Nyokabi is his second wife, and the mother of Njoroge and Mwangi. Njoroge has four brothers: Boro, Kamau, Kori and Mwangi (who is Njoroge's only full brother, who died in World War II). Boro: Son of Njeri who fights for the Allies in World War II. Upon returning his anger against the colonial government is compounded by their confiscation of the his land. Boro's anger and position as eldest son leads him to question and ridicule Ngotho, which eventually defeats their father's will (upon realizing his life was wasted waiting and not acting). It is eventually revealed that Boro is the leader of the Mau Mau (earlier alluded to as "entering politics") and murders Mr.Howlands. He is caught by police immediately after and is scheduled to be executed by the book's end. It is highly likely that it is also Boro who kills Jacobo. Mwihaki: Njoroge's best friend (and later develops into his love interest). Daughter of Jacobo. When it is revealed that his family killed Jacobo (most likely Boro), Mwihaki distances herself from Njoroge, asking for time to mourn her father and care for her mother. Jacobo: Mwihaki's father and an important landowner. Chief of the village. Mr. 
Howlands: A white settler who emigrated to colonial Kenya and now owns a farm made up of land that originally belonged to Ngotho's ancestors. Has three children: Peter who died in World War II before the book's beginning, a daughter who becomes a missionary, and Stephen who met Njoroge while the two were in high school. Themes and motifs Weep Not, Child integrates Gikuyu mythology and the ideology of nationalism that serves as catalyst for much of the novel's action. The novel explores the negative aspects of colonial rule over Kenya. Njoroge's aspiration to attend university is frustrated by both the violence of the Mau Mau rebels and the violent response of the colonial government. This disappointment leads to his alienation from his family and ultimately his suicide attempt. The novel also ponders the role of saviours and salvation. The author notes in his The River Between: "Salvation shall come from the hills. From the blood that flows in me, I say from the same tree, a son shall rise. And his duty shall be to lead and save the people." Jomo Kenyatta, the first prime minister of Kenya, is immortalised in Weep Not, Child. The author says, "Jomo had been his (Ngotho's) hope. Ngotho had come to think that it was Jomo who would drive away the white man. To him, Jomo stood for custom and traditions purified by grace of learning and much travel." Njoroge comes to view Jomo as a messiah who will win the struggle against the colonial government. See also Things Fall Apart Death and the King's Horseman References External links Official homepage of Ngũgĩ wa Thiong'o BBC profile of Ngũgĩ wa Thiong'o Weep Not, Child at Google Books British Empire in fiction Novels set in colonial Africa Historical novels Kenyan English-language novels Novels by Ngũgĩ wa Thiong'o Novels set in Kenya 1964 novels Heinemann (publisher) books Postcolonial novels African Writers Series 1964 debut novels Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: When was Weep Not, Child first published? Only give me the answer and do not output any other words.
Answer:
[ "Weep Not, Child was first published in 1964." ]
276
2,059
single_doc_qa
0707717390e54f02b26e8e945cb40991
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Offensive content has become pervasive in social media and a cause for concern for government organizations, online communities, and social media platforms. One of the most common strategies to tackle the problem is to train systems capable of recognizing offensive content, which can then be deleted or set aside for human moderation. In the last few years, there have been several studies published on the application of computational methods to deal with this problem. Most prior work focuses on a particular aspect of offensive language such as abusive language BIBREF0 , BIBREF1 , (cyber-)aggression BIBREF2 , (cyber-)bullying BIBREF3 , BIBREF4 , toxic comments, hate speech BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , and offensive language BIBREF11 . Prior work has focused on these aspects of offensive language in Twitter BIBREF3 , BIBREF7 , BIBREF8 , BIBREF11 , Wikipedia comments, and Facebook posts BIBREF2 . Recently, Waseem et al. ( BIBREF12 ) acknowledged the similarities among prior work and discussed the need for a typology that differentiates between whether the (abusive) language is directed towards a specific individual or entity or towards a generalized group, and whether the abusive content is explicit or implicit. Wiegand et al. ( BIBREF11 ) followed this trend as well on German tweets. In their evaluation, they include a task to detect offensive vs. non-offensive tweets, and a second task for distinguishing between the offensive tweets as profanity, insult, or abuse. However, no prior work has explored the target of the offensive language, which is important in many scenarios, e.g., when studying hate speech with respect to a specific target. Therefore, we expand on these ideas by proposing a hierarchical three-level annotation model that encompasses whether the language is offensive, its type, and its target. Using this annotation model, we create a new large publicly available dataset of English tweets. The key contributions of this paper are as follows: Related Work Different abusive and offensive language identification sub-tasks have been explored in the past few years, including aggression identification, bullying detection, hate speech, toxic comments, and offensive language. Aggression identification: The TRAC shared task on Aggression Identification BIBREF2 provided participants with a dataset containing 15,000 annotated Facebook posts and comments in English and Hindi for training and validation. For testing, two different sets, one from Facebook and one from Twitter, were provided. Systems were trained to discriminate between three classes: non-aggressive, covertly aggressive, and overtly aggressive. Bullying detection: Several studies have been published on bullying detection. One of them is the study by Xu et al. (2012), which applies sentiment analysis to detect bullying in tweets and uses topic models to identify relevant topics in bullying. Another related study is that of Dadvar et al. (2013), which uses user-related features such as the frequency of profanity in previous messages to improve bullying detection. Hate speech identification: It is perhaps the most widespread abusive language detection sub-task.
There have been several studies published on this sub-task, such as those by Kwok and Wang (2013) and Djuric et al. (2015), who build binary classifiers to distinguish between `clean' comments and comments containing hate speech and profanity. More recently, Davidson et al. (2017) presented the hate speech detection dataset containing over 24,000 English tweets labeled as non-offensive, hate speech, and profanity. Offensive language: The GermEval BIBREF11 shared task focused on offensive language identification in German tweets. A dataset of over 8,500 annotated tweets was provided for a coarse-grained binary classification task in which systems were trained to discriminate between offensive and non-offensive tweets, and a second task where the organizers broke down the offensive class into three classes: profanity, insult, and abuse. Toxic comments: The Toxic Comment Classification Challenge was an open competition at Kaggle which provided participants with comments from Wikipedia labeled in six classes: toxic, severe toxic, obscene, threat, insult, and identity hate. While each of these sub-tasks tackles a particular type of abuse or offense, they share similar properties, and the hierarchical annotation model proposed in this paper aims to capture this. Considering that, for example, an insult targeted at an individual is commonly known as cyberbullying and that insults targeted at a group are known as hate speech, we pose that OLID's hierarchical annotation model makes it a useful resource for various offensive language identification sub-tasks. Hierarchically Modelling Offensive Content In the OLID dataset, we use a hierarchical annotation model split into three levels to distinguish between whether language is offensive or not (A), and the type (B) and target (C) of the offensive language. Each level is described in more detail in the following subsections and examples are shown in Table TABREF10 . Level A: Offensive Language Detection Level A discriminates between offensive (OFF) and non-offensive (NOT) tweets. Not Offensive (NOT): Posts that do not contain offense or profanity; Offensive (OFF): We label a post as offensive if it contains any form of non-acceptable language (profanity) or a targeted offense, which can be veiled or direct. This category includes insults, threats, and posts containing profane language or swear words. Level B: Categorization of Offensive Language Level B categorizes the type of offense and two labels are used: targeted (TIN) and untargeted (UNT) insults and threats. Targeted Insult (TIN): Posts which contain an insult/threat to an individual, group, or others (see next layer); Untargeted (UNT): Posts containing non-targeted profanity and swearing. Posts with general profanity are not targeted, but they contain non-acceptable language. Level C: Offensive Language Target Identification Level C categorizes the targets of insults and threats as individual (IND), group (GRP), and other (OTH). Individual (IND): Posts targeting an individual. It can be a famous person, a named individual or an unnamed participant in the conversation. Insults and threats targeted at individuals are often defined as cyberbullying. Group (GRP): The target of these offensive posts is a group of people considered as a unit due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or other common characteristic. Many of the insults and threats targeted at a group correspond to what is commonly understood as hate speech.
Other (OTH): The target of these offensive posts does not belong to any of the previous two categories (e.g. an organization, a situation, an event, or an issue). Data Collection The data included in OLID has been collected from Twitter. We retrieved the data using the Twitter API by searching for keywords and constructions that are often included in offensive messages, such as `she is' or `to:BreitBartNews'. We carried out a first round of trial annotation of 300 instances with six experts. The goal of the trial annotation was to 1) evaluate the proposed tagset; 2) evaluate the data retrieval method; and 3) create a gold standard with instances that could be used as test questions in the training and test setting annotation, which was carried out using crowdsourcing. The breakdown of keywords and their offensive content in the trial data of 300 tweets is shown in Table TABREF14 . We included a left (@NewYorker) and a far-right (@BreitBartNews) news account because there tends to be political offense in the comments. One of the best-performing search keys was the set of tweets flagged as not being safe by the Twitter `safe' filter (the `-' indicates `not safe'). The vast majority of content on Twitter is not offensive, so we tried different strategies to keep a reasonable number of tweets in the offensive class, amounting to around 30% of the dataset, including excluding some keywords that were not high in offensive content such as `they are' and `to:NewYorker'. Although `he is' is lower in offensive content, we kept it as a keyword to avoid gender bias. In addition to the keywords in the trial set, we searched for more political keywords, which tend to be higher in offensive content, and sampled our dataset such that 50% of the tweets come from political keywords and 50% come from non-political keywords. In addition to the keywords `gun control' and `to:BreitbartNews', political keywords used to collect these tweets are `MAGA', `antifa', `conservative' and `liberal'. We computed Fleiss' $\kappa$ on the trial set for the five annotators on 21 of the tweets. $\kappa$ is .83 for Layer A (OFF vs NOT), indicating high agreement. As to normalization and anonymization, no user metadata or Twitter IDs have been stored, and URLs and Twitter mentions have been substituted with placeholders. We follow prior work in related areas (Burnap and Williams, 2015; Davidson et al., 2017) and annotate our data through crowdsourcing, using the platform Figure Eight. We ensure data quality by 1) only receiving annotations from individuals who were experienced in the platform, and 2) using test questions to discard annotations of individuals who did not reach a certain threshold. Each instance in the dataset was annotated by multiple annotators and inter-annotator agreement has been calculated. We first acquired two annotations for each instance. In case of 100% agreement, we considered these as acceptable annotations, and in case of disagreement, we requested more annotations until the agreement was above 66%. After the crowdsourcing annotation, we used expert adjudication to guarantee the quality of the annotation. The breakdown of the data into training and testing for the labels from each level is shown in Table TABREF15 . Experiments and Evaluation We assess our dataset using traditional and deep learning methods. Our simplest model is a linear SVM trained on word unigrams. SVMs have produced state-of-the-art results for many text classification tasks BIBREF13 .
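As a rough sketch of what such a unigram SVM baseline looks like (using scikit-learn), consider the following; the file names and column names are placeholders for illustration, not the actual OLID release format.

```python
# Minimal sketch of a linear SVM over word unigrams for Level A (OFF vs NOT).
# File names, column names, and preprocessing are assumptions for this example.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

train = pd.read_csv("olid_train.tsv", sep="\t")   # hypothetical file layout
test = pd.read_csv("olid_test.tsv", sep="\t")

vectorizer = CountVectorizer(lowercase=True)       # word unigram counts
X_train = vectorizer.fit_transform(train["tweet"])
X_test = vectorizer.transform(test["tweet"])

clf = LinearSVC()                                  # linear SVM baseline
clf.fit(X_train, train["subtask_a"])               # labels: OFF / NOT
pred = clf.predict(X_test)

# Macro-averaged F1, since the label distribution is highly imbalanced.
print(f1_score(test["subtask_a"], pred, average="macro"))
```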
We also train a bidirectional Long Short-Term Memory (BiLSTM) model, which we adapted from the sentiment analysis system of Rasooli et al. (2018) and altered to predict offensive labels instead. It consists of (1) an input embedding layer, (2) a bidirectional LSTM layer, and (3) an average pooling layer of input features. The concatenation of the LSTM output and the average-pooled features is passed through a dense layer, and the output is passed through a softmax function. We set two input channels for the input embedding layers: pre-trained FastText embeddings BIBREF14 , as well as updatable embeddings learned by the model during training. Finally, we also apply a Convolutional Neural Network (CNN) model based on the architecture of BIBREF15 , using the same multi-channel inputs as the above BiLSTM. Our models are trained on the training data, and evaluated by predicting the labels for the held-out test set. The distribution is described in Table TABREF15 . We evaluate and compare the models using the macro-averaged F1-score, as the label distribution is highly imbalanced. Per-class Precision (P), Recall (R), and F1-score (F1), along with other averaged metrics, are also reported. The models are compared against baselines of predicting all labels as the majority or minority classes. Offensive Language Detection The performance on discriminating between offensive (OFF) and non-offensive (NOT) posts is reported in Table TABREF18 . We can see that all systems perform significantly better than chance, with the neural models being substantially better than the SVM. The CNN outperforms the RNN model, achieving a macro-F1 score of 0.80. Categorization of Offensive Language In this experiment, the two systems were trained to discriminate between targeted insults and threats (TIN) and untargeted (UNT) offenses, which generally refer to profanity. The results are shown in Table TABREF19 . The CNN system achieved higher performance in this experiment compared to the BiLSTM, with a macro-F1 score of 0.69. All systems performed better at identifying targeted insults and threats (TIN) than untargeted offenses (UNT). Offensive Language Target Identification The results of the offensive target identification experiment are reported in Table TABREF20 . Here the systems were trained to distinguish between three targets: a group (GRP), an individual (IND), or others (OTH). All three models achieved similar results, far surpassing the random baselines, with a slight performance edge for the neural models. The performance of all systems for the OTH class is 0. This poor performance can be explained by two main factors. First, unlike the two other classes, OTH is a heterogeneous collection of targets. It includes offensive tweets targeted at organizations, situations, events, etc., making it more challenging for systems to learn discriminative properties of this class. Second, this class contains fewer training instances than the other two. There are only 395 instances of OTH, compared to 1,075 of GRP and 2,407 of IND. Conclusion and Future Work This paper presents OLID, a new dataset with annotation of type and target of offensive language. OLID is the official dataset of the shared task SemEval 2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval) BIBREF16 . In OffensEval, each annotation level in OLID is an independent sub-task. The dataset contains 14,100 tweets and is released freely to the research community.
To the best of our knowledge, this is the first dataset to contain annotation of type and target of offenses in social media and it opens several new avenues for research in this area. We present baseline experiments using SVMs and neural networks to identify the offensive tweets, discriminate between insults, threats, and profanity, and finally to identify the target of the offensive messages. The results show that this is a challenging task. A CNN-based sentence classifier achieved the best results in all three sub-tasks. In future work, we would like to make a cross-corpus comparison of OLID and datasets annotated for similar tasks such as aggression identification BIBREF2 and hate speech detection BIBREF8 . This comparison is, however, far from trivial as the annotation of OLID is different. Acknowledgments The research presented in this paper was partially supported by an ERAS fellowship awarded by the University of Wolverhampton. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What models are used in the experiment? Only give me the answer and do not output any other words.
Answer:
[ "linear SVM, bidirectional Long Short-Term-Memory (BiLSTM), Convolutional Neural Network (CNN)", "linear SVM, bidirectional Long Short-Term-Memory (BiLSTM), Convolutional Neural Network (CNN)", "linear SVM trained on word unigrams, bidirectional Long Short-Term-Memory (BiLSTM), Convolutional Neural Network (CNN) " ]
276
2,998
single_doc_qa
988771dd304641869df0262f3794c1c6
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Deep Neural Networks (DNN) have been widely employed in industry for solving various Natural Language Processing (NLP) tasks, such as text classification, sequence labeling, question answering, etc. However, when engineers apply DNN models to address specific NLP tasks, they often face the following challenges. The above challenges often hinder the productivity of engineers, and result in less optimal solutions to their given tasks. This motivates us to develop an NLP toolkit for DNN models, which facilitates engineers to develop DNN approaches. Before designing this NLP toolkit, we conducted a survey among engineers and identified a spectrum of three typical personas. To satisfy the requirements of all the above three personas, the NLP toolkit has to be generic enough to cover as many tasks as possible. At the same time, it also needs to be flexible enough to allow alternative network architectures as well as customized modules. Therefore, we analyzed the NLP jobs submitted to a commercial centralized GPU cluster. Table TABREF11 showed that about 87.5% NLP related jobs belong to a few common tasks, including sentence classification, text matching, sequence labeling, MRC, etc. It further suggested that more than 90% of the networks were composed of several common components, such as embedding, CNN/RNN, Transformer and so on. Based on the above observations, we developed NeuronBlocks, a DNN toolkit for NLP tasks. The basic idea is to provide two layers of support to the engineers. The upper layer targets common NLP tasks. For each task, the toolkit contains several end-to-end network templates, which can be immediately instantiated with simple configuration. The bottom layer consists of a suite of reusable and standard components, which can be adopted as building blocks to construct networks with complex architecture. By following the interface guidelines, users can also contribute to this gallery of components with their own modules. The technical contributions of NeuronBlocks are summarized into the following three aspects. Related Work There are several general-purpose deep learning frameworks, such as TensorFlow, PyTorch and Keras, which have gained popularity in NLP community. These frameworks offer huge flexibility in DNN model design and support various NLP tasks. However, building models under these frameworks requires a large overhead of mastering these framework details. Therefore, higher level abstraction to hide the framework details is favored by many engineers. There are also several popular deep learning toolkits in NLP, including OpenNMT BIBREF0 , AllenNLP BIBREF1 etc. OpenNMT is an open-source toolkit mainly targeting neural machine translation or other natural language generation tasks. AllenNLP provides several pre-built models for NLP tasks, such as semantic role labeling, machine comprehension, textual entailment, etc. Although these toolkits reduce the development cost, they are limited to certain tasks, and thus not flexible enough to support new network architectures or new components. Design The Neuronblocks is built on PyTorch. The overall framework is illustrated in Figure FIGREF16 . It consists of two layers: the Block Zoo and the Model Zoo. 
In Block Zoo, the most commonly used components of deep neural networks are categorized into several groups according to their functions. Within each category, several alternative components are encapsulated into standard and reusable blocks with a consistent interface. These blocks serve as basic and exchangeable units to construct complex network architectures for different NLP tasks. In Model Zoo, the most popular NLP tasks are identified. For each task, several end-to-end network templates are provided in the form of JSON configuration files. Users can simply browse these configurations and choose one or more to instantiate. The whole task can be completed without any coding efforts. Block Zoo We recognize the following major functional categories of neural network components. Each category covers as many commonly used modules as possible. The Block Zoo is an open framework, and more modules can be added in the future. Embedding Layer: Word/character embeddings and extra handcrafted feature embeddings such as POS tags are supported. Neural Network Layers: Block Zoo provides common layers like RNN, CNN, QRNN BIBREF2 , Transformer BIBREF3 , Highway network, Encoder-Decoder architecture, etc. Furthermore, attention mechanisms are widely used in neural networks. Thus we also support multiple attention layers, such as Linear/Bi-linear Attention, Full Attention BIBREF4 , Bidirectional attention flow BIBREF5 , etc. Meanwhile, regularization layers such as Dropout, Layer Norm, Batch Norm, etc. are also supported for improving generalization ability. Loss Function: Besides the loss functions built into PyTorch, we offer more options such as Focal Loss BIBREF6 . Metrics: For the classification task, AUC, Accuracy, Precision/Recall, and F1 metrics are supported. For the sequence labeling task, F1/Accuracy are supported. For the knowledge distillation task, MSE/RMSE are supported. For the MRC task, ExactMatch/F1 are supported. Model Zoo In NeuronBlocks, we identify four of the most popular types of NLP tasks. For each task, we provide various end-to-end network templates. Text Classification and Matching. Tasks such as domain/intent classification and question-answer matching are supported. Sequence Labeling. Predict each token in a sequence into predefined types. Common tasks include NER, POS tagging, Slot tagging, etc. Knowledge Distillation BIBREF7 . Teacher-student based knowledge distillation is one common approach for model compression. NeuronBlocks provides a knowledge distillation template to improve the inference speed of heavy DNN models like BERT/GPT. Extractive Machine Reading Comprehension. Given a pair of question and passage, predict the start and end positions of the answer spans in the passage. User Interface NeuronBlocks provides a convenient user interface for users to build, train, and test DNN models. The details are described in the following. I/O interface. This part defines model input/output, such as training data, pre-trained models/embeddings, model saving path, etc. Model Architecture interface. This is the key part of the configuration file, which defines the whole model architecture. Figure FIGREF19 shows an example of how to specify a model architecture using the blocks in NeuronBlocks.
To be more specific, it consists of a list of layers/blocks to construct the architecture, where the blocks are supplied in the gallery of Block Zoo. Training Parameters interface. In this part, the model optimizer as well as all other training hyper parameters are indicated. Workflow Figure FIGREF34 shows the workflow of building DNN models in NeuronBlocks. Users only need to write a JSON configuration file. They can either instantiate an existing template from Model Zoo, or construct a new architecture based on the blocks from Block Zoo. This configuration file is shared across training, test, and prediction. For model hyper-parameter tuning or architecture modification, users just need to change the JSON configuration file. Advanced users can also contribute novel customized blocks into Block Zoo, as long as they follow the same interface guidelines with the existing blocks. These new blocks can be further shared across all users for model architecture design. Moreover, NeuronBlocks has flexible platform support, such as GPU/CPU, GPU management platforms like PAI. Experiments To verify the performance of NeuronBlocks, we conducted extensive experiments for common NLP tasks on public data sets including CoNLL-2003 BIBREF14 , GLUE benchmark BIBREF13 , and WikiQA corpus BIBREF15 . The experimental results showed that the models built with NeuronBlocks can achieve reliable and competitive results on various tasks, with productivity greatly improved. Sequence Labeling For sequence labeling task, we evaluated NeuronBlocks on CoNLL-2003 BIBREF14 English NER dataset, following most works on the same task. This dataset includes four types of named entities, namely, PERSON, LOCATION, ORGANIZATION, and MISC. We adopted the BIOES tagging scheme instead of IOB, as many previous works indicated meaningful improvement with BIOES scheme BIBREF16 , BIBREF17 . Table TABREF28 shows the results on CoNLL-2003 Englist testb dataset, with 12 different combinations of network layers/blocks, such as word/character embedding, CNN/LSTM and CRF. The results suggest that the flexible combination of layers/blocks in NeuronBlocks can easily reproduce the performance of original models, with comparative or slightly better performance. GLUE Benchmark The General Language Understanding Evaluation (GLUE) benchmark BIBREF13 is a collection of natural language understanding tasks. We experimented on the GLUE benchmark tasks using BiLSTM and Attention based models. As shown in Table TABREF29 , the models built by NeuronBlocks can achieve competitive or even better results on GLUE tasks with minimal coding efforts. Knowledge Distillation We evaluated Knowledge Distillation task in NeuronBlocks on a dataset collected from one commercial search engine. We refer to this dataset as Domain Classification Dataset. Each sample in this dataset consists of two parts, i.e., a question and a binary label indicating whether the question belongs to a specific domain. Table TABREF36 shows the results, where Area Under Curve (AUC) metric is used as the performance evaluation criteria and Queries per Second (QPS) is used to measure inference speed. By knowledge distillation training approach, the student model by NeuronBlocks managed to get 23-27 times inference speedup with only small performance regression compared with BERTbase fine-tuned classifier. WikiQA The WikiQA corpus BIBREF15 is a publicly available dataset for open-domain question answering. 
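As a rough illustration of the configuration-driven workflow just described, a much-simplified config for a text classifier might look like the sketch below. The key names are invented for this example and do not reflect NeuronBlocks' actual JSON schema.

```python
# Hypothetical sketch of a configuration-driven model spec, in the spirit of
# NeuronBlocks' JSON templates. All key names here are illustrative only.
import json

config = {
    "inputs": {"train": "train.tsv", "test": "test.tsv",
               "pretrained_emb": "glove.840B.300d.txt"},
    "outputs": {"model_path": "models/text_cls.pkl"},
    "architecture": [
        {"layer": "Embedding", "dim": 300},
        {"layer": "BiLSTM", "hidden_dim": 256, "dropout": 0.3},
        {"layer": "Pooling", "type": "max"},
        {"layer": "Linear", "num_classes": 2},
    ],
    "loss": "CrossEntropy",
    "metrics": ["accuracy", "auc"],
    "training": {"optimizer": "Adam", "lr": 1e-3, "batch_size": 32, "epochs": 10},
}

# The same single file then drives training, testing, and prediction.
with open("model_config.json", "w") as f:
    json.dump(config, f, indent=2)
```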
This dataset contains 3,047 questions from Bing query logs, each associated with some candidate answer sentences from Wikipedia. We conducted experiments on WikiQA dataset using CNN, BiLSTM, and Attention based models. The results are shown in Table TABREF41 . The models built in NeuronBlocks achieved competitive or even better results with simple model configurations. Conclusion and Future Work In this paper, we introduce NeuronBlocks, a DNN toolkit for NLP tasks built on PyTorch. NeuronBlocks targets three types of engineers, and provides a two-layer solution to satisfy the requirements from all three types of users. To be more specific, the Model Zoo consists of various templates for the most common NLP tasks, while the Block Zoo supplies a gallery of alternative layers/modules for the networks. Such design achieves a balance between generality and flexibility. Extensive experiments have verified the effectiveness of this approach. NeuronBlocks has been widely used in a product team of a commercial search engine, and significantly improved the productivity for developing NLP DNN approaches. As an open-source toolkit, we will further extend it in various directions. The following names a few examples. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What neural network modules are included in NeuronBlocks? Only give me the answer and do not output any other words.
Answer:
[ "Embedding Layer, Neural Network Layers, Loss Function, Metrics", "Embedding Layer, Neural Network Layers, Loss Function, Metrics" ]
276
2,307
single_doc_qa
3d5b2e58fdb34eb4ae99ef0be5ba8187
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction From a group of small users at the time of its inception in 2009, Quora has evolved in the last few years into one of the largest community driven Q&A sites with diverse user communities. With the help of efficient content moderation/review policies and active in-house review team, efficient Quora bots, this site has emerged into one of the largest and reliable sources of Q&A on the Internet. On Quora, users can post questions, follow questions, share questions, tag them with relevant topics, follow topics, follow users apart from answering, commenting, upvoting/downvoting etc. The integrated social structure at the backbone of it and the topical organization of its rich content have made Quora unique with respect to other Q&A sites like Stack Overflow, Yahoo! Answers etc. and these are some of the prime reasons behind its popularity in recent times. Quality question posting and getting them answered are the key objectives of any Q&A site. In this study we focus on the answerability of questions on Quora, i.e., whether a posted question shall eventually get answered. In Quora, the questions with no answers are referred to as “open questions”. These open questions need to be studied separately to understand the reason behind their not being answered or to be precise, are there any characteristic differences between `open' questions and the answered ones. For example, the question “What are the most promising advances in the treatment of traumatic brain injuries?” was posted on Quora on 23rd June, 2011 and got its first answer after almost 2 years on 22nd April, 2013. The reason that this question remained open so long might be the hardness of answering it and the lack of visibility and experts in the domain. Therefore, it is important to identify the open questions and take measures based on the types - poor quality questions can be removed from Quora and the good quality questions can be promoted so that they get more visibility and are eventually routed to topical experts for better answers. Characterization of the questions based on question quality requires expert human interventions often judging if a question would remain open based on factors like if it is subjective, controversial, open-ended, vague/imprecise, ill-formed, off-topic, ambiguous, uninteresting etc. Collecting judgment data for thousands of question posts is a very expensive process. Therefore, such an experiment can be done only for a small set of questions and it would be practically impossible to scale it up for the entire collection of posts on the Q&A site. In this work, we show that appropriate quantification of various linguistic activities can naturally correspond to many of the judgment factors mentioned above (see table 2 for a collection of examples). These quantities encoding such linguistic activities can be easily measured for each question post and thus helps us to have an alternative mechanism to characterize the answerability on the Q&A site. There are several research works done in Q&A focusing on content of posts. BIBREF0 exploit community feedback to identify high quality content on Yahoo! Answers. BIBREF1 use textual features to predict answer quality on Yahoo! Answers. BIBREF2 , investigate predictors of answer quality through a comparative, controlled field study of user responses. 
BIBREF3 study the problem of how long questions remain unanswered. BIBREF4 propose a prediction model on how many answers a question shall receive. BIBREF5 analyze and predict unanswered questions on Yahoo Answers. BIBREF6 study question quality in Yahoo! Answers. Dataset description We obtained our Quora dataset BIBREF7 through web-based crawls between June 2014 to August 2014. This crawling exercise has resulted in the accumulation of a massive Q&A dataset spanning over a period of over four years starting from January 2010 to May 2014. We initiated crawling with 100 questions randomly selected from different topics so that different genre of questions can be covered. The crawling of the questions follow a BFS pattern through the related question links. We obtained 822,040 unique questions across 80,253 different topics with a total of 1,833,125 answers to these questions. For each question, we separately crawl their revision logs that contain different types of edit information for the question and the activity log of the question asker. Linguistic activities on Quora In this section, we identify various linguistic activities on Quora and propose quantifications of the language usage patterns in this Q&A site. In particular, we show that there exists significant differences in the linguistic structure of the open and the answered questions. Note that most of the measures that we define are simple, intuitive and can be easily obtained automatically from the data (without manual intervention). Therefore the framework is practical, inexpensive and highly scalable. Content of a question text is important to attract people and make them engage more toward it. The linguistic structure (i.e., the usage of POS tags, the use of Out-of-Vocabulary words, character usage etc.) one adopts are key factors for answerability of questions. We shall discuss the linguistic structure that often represents the writing style of a question asker. In fig 1 (a), we observe that askers of open questions generally use more no. of words compared to answered questions. To understand the nature of words (standard English words or chat-like words frequently used in social media) used in the text, we compare the words with GNU Aspell dictionary to see whether they are present in the dictionary or not. We observe that both open questions and answered questions follow similar distribution (see fig 1 (b)). Part-of-Speech (POS) tags are indicators of grammatical aspects of texts. To observe how the Part-of-Speech tags are distributed in the question texts, we define a diversity metric. We use the standard CMU POS tagger BIBREF8 for identifying the POS tags of the constituent words in the question. We define the POS tag diversity (POSDiv) of a question $q_i$ as follows: $POSDiv(q_i) = -\sum _{j \in pos_{set}}p_j\times \log (p_j)$ where $p_j$ is the probability of the $j^{th}$ POS in the set of POS tags. Fig 1 (c) shows that the answered questions have lower POS tag diversity compared to open questions. Question texts undergo several edits so that their readability and the engagement toward them are enhanced. It is interesting to identify how far such edits can make the question different from the original version of it. To capture this phenomena, we have adopted ROUGE-LCS recall BIBREF9 from the domain of text summarization. Higher the recall value, lesser are the changes in the question text. 
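The POS-tag diversity defined above is simply the Shannon entropy of a question's POS-tag distribution. A minimal sketch of the computation is given below; the NLTK tokenizer and tagger are used as a stand-in for the CMU POS tagger mentioned in the text, and the snippet assumes the relevant NLTK models are already downloaded.

```python
# Sketch of POS-tag diversity: entropy of the POS distribution in a question.
# Uses NLTK as a stand-in tagger (requires punkt and the perceptron tagger data).
import math
from collections import Counter
import nltk

def pos_diversity(question: str) -> float:
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(question))]
    counts = Counter(tags)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

print(pos_diversity("What are the most promising advances in the treatment "
                    "of traumatic brain injuries?"))
```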
From fig 1 (d), we observe that open questions tend to have higher recall compared to the answered ones which suggests that they have not gone through much of text editing thus allowing for almost no scope of readability enhancement. Psycholinguistic analysis: The way an individual talks or writes, give us clue to his/her linguistic, emotional, and cognitive states. A question asker's linguistic, emotional, cognitive states are also revealed through the language he/she uses in the question text. In order to capture such psycholinguistic aspects of the asker, we use Linguistic Inquiry and Word Count (LIWC) BIBREF10 that analyzes various emotional, cognitive, and structural components present in individuals' written texts. LIWC takes a text document as input and outputs a score for the input for each of the LIWC categories such as linguistic (part-of-speech of the words, function words etc.) and psychological categories (social, anger, positive emotion, negative emotion, sadness etc.) based on the writing style and psychometric properties of the document. In table 1 , we perform a comparative analysis of the asker's psycholinguistic state while asking an open question and an answered question. Askers of open questions use more function words, impersonal pronouns, articles on an average whereas asker of answered questions use more personal pronouns, conjunctions and adverbs to describe their questions. Essentially, open questions lack content words compared to answered questions which, in turn, affects the readability of the question. As far as the psychological aspects are concerned, answered question askers tend to use more social, family, human related words on average compared to an open question asker. The open question askers express more positive emotions whereas the answered question asker tend to express more negative emotions in their texts. Also, answered question askers are more emotionally involved and their questions reveal higher usage of anger, sadness, anxiety related words compared to that of open questions. Open questions, on the other hand, contains more sexual, body, health related words which might be reasons why they do not attract answers. In table 2 , we show a collection of examples of open questions to illustrate that many of the above quantities based on the linguistic activities described in this section naturally correspond to the factors that human judges consider responsible for a question remaining unanswered. This is one of the prime reasons why these quantities qualify as appropriate indicators of answerability. Prediction model In this section, we describe the prediction framework in detail. Our goal is to predict whether a given question after a time period $t$ will be answered or not. Linguistic styles of the question asker The content and way of posing a question is important to attract answers. We have observed in the previous section that these linguistic as well as psycholinguistic aspects of the question asker are discriminatory factors. For the prediction, we use the following features: Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Do the answered questions measure for the usefulness of the answer? Only give me the answer and do not output any other words.
Answer:
[ "No" ]
276
1,948
single_doc_qa
4cc15675979b41b181ccde4beba00298
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Text simplification aims to reduce the lexical and structural complexity of a text, while still retaining the semantic meaning, which can help children, non-native speakers, and people with cognitive disabilities, to understand text better. One of the methods of automatic text simplification can be generally divided into three categories: lexical simplification (LS) BIBREF0 , BIBREF1 , rule-based BIBREF2 , and machine translation (MT) BIBREF3 , BIBREF4 . LS is mainly used to simplify text by substituting infrequent and difficult words with frequent and easier words. However, there are several challenges for the LS approach: a great number of transformation rules are required for reasonable coverage and should be applied based on the specific context; third, the syntax and semantic meaning of the sentence is hard to retain. Rule-based approaches use hand-crafted rules for lexical and syntactic simplification, for example, substituting difficult words in a predefined vocabulary. However, such approaches need a lot of human-involvement to manually define these rules, and it is impossible to give all possible simplification rules. MT-based approach has attracted great attention in the last several years, which addresses text simplification as a monolingual machine translation problem translating from 'ordinary' and 'simplified' sentences. In recent years, neural Machine Translation (NMT) is a newly-proposed deep learning approach and achieves very impressive results BIBREF5 , BIBREF6 , BIBREF7 . Unlike the traditional phrased-based machine translation system which operates on small components separately, NMT system is being trained end-to-end, without the need to have external decoders, language models or phrase tables. Therefore, the existing architectures in NMT are used for text simplification BIBREF8 , BIBREF4 . However, most recent work using NMT is limited to the training data that are scarce and expensive to build. Language models trained on simplified corpora have played a central role in statistical text simplification BIBREF9 , BIBREF10 . One main reason is the amount of available simplified corpora typically far exceeds the amount of parallel data. The performance of models can be typically improved when trained on more data. Therefore, we expect simplified corpora to be especially helpful for NMT models. In contrast to previous work, which uses the existing NMT models, we explore strategy to include simplified training corpora in the training process without changing the neural network architecture. We first propose to pair simplified training sentences with synthetic ordinary sentences during training, and treat this synthetic data as additional training data. We obtain synthetic ordinary sentences through back-translation, i.e. an automatic translation of the simplified sentence into the ordinary sentence BIBREF11 . Then, we mix the synthetic data into the original (simplified-ordinary) data to train NMT model. Experimental results on two publicly available datasets show that we can improve the text simplification quality of NMT models by mixing simplified sentences into the training set over NMT model only using the original training data. 
Related Work Automatic TS is a complicated natural language processing (NLP) task, which consists of lexical and syntactic simplification levels BIBREF12 . It has attracted much attention recently as it can make texts more accessible to wider audiences and, used as a pre-processing step, improve the performance of various NLP tasks and systems BIBREF13 , BIBREF14 , BIBREF15 . Usually, hand-crafted, supervised, and unsupervised methods based on resources like English Wikipedia and Simple English Wikipedia (EW-SEW) BIBREF10 are utilized for extracting simplification rules. The automatic TS task is easily confused with the automatic summarization task BIBREF3 , BIBREF16 , BIBREF6 , but TS is different from text summarization, whose focus is to reduce length and redundant content. At the lexical level, lexical simplification systems often substitute difficult words with more common words, which only requires a large corpus of regular text to obtain word embeddings and find words similar to a complex word BIBREF1 , BIBREF9 . Biran et al. BIBREF0 adopted an unsupervised method for learning pairs of complex and simpler synonyms from a corpus consisting of Wikipedia and Simple Wikipedia. At the sentence level, a sentence simplification model was proposed by tree transformation based on statistical machine translation (SMT) BIBREF3 . Woodsend and Lapata BIBREF17 presented a data-driven model based on a quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. Wubben et al. BIBREF18 proposed a phrase-based machine translation (PBMT) model that is trained on ordinary-simplified sentence pairs. Xu et al. BIBREF19 proposed a syntax-based machine translation model using simplification-specific objective functions and features to encourage simpler output. Compared with SMT, neural machine translation (NMT) has been shown to produce state-of-the-art results BIBREF5 , BIBREF7 . The central approach of NMT is an encoder-decoder architecture implemented by recurrent neural networks, which can represent the input sequence as a vector and then decode that vector into an output sequence. Therefore, NMT models have been used for the text simplification task and have achieved good results BIBREF8 , BIBREF4 , BIBREF20 . The main limitation of the aforementioned NMT models for text simplification is their dependence on parallel ordinary-simplified sentence pairs. Because such pairs are expensive and time-consuming to build, the largest available dataset is EW-SEW, which has only 296,402 sentence pairs. This dataset is insufficient for an NMT model to obtain its best parameters. Considering that simplified data plays an important role in boosting fluency for phrase-based text simplification, we investigate the use of simplified data for text simplification. We are the first to show that neural translation models can be effectively adapted for text simplification with simplified corpora. Simplified Corpora We collected a simplified dataset from Simple English Wikipedia, which is freely available and has been previously used for many text simplification methods BIBREF0 , BIBREF10 , BIBREF3 . Simple English Wikipedia is much easier to understand than normal English Wikipedia. We downloaded all articles from Simple English Wikipedia. For these articles, we removed stubs, navigation pages and any article that consisted of a single sentence.
We then split them into sentences with Stanford CoreNLP BIBREF21 , and deleted sentences whose number of words was smaller than 10 or larger than 40. After removing repeated sentences, we chose 600K sentences as the simplified data, with 11.6M words and a vocabulary size of 82K. Text Simplification using Neural Machine Translation Our work is built on attention-based NMT BIBREF5 as an encoder-decoder network with recurrent neural networks (RNN), which simultaneously conducts dynamic alignment and generation of the target simplified sentence. The encoder uses a bidirectional RNN that consists of a forward and a backward RNN. Given a source sentence $x = (x_1, \dots, x_T)$ , the forward RNN and backward RNN calculate forward hidden states $(\overrightarrow{h}_1, \dots, \overrightarrow{h}_T)$ and backward hidden states $(\overleftarrow{h}_1, \dots, \overleftarrow{h}_T)$ , respectively. The annotation vector $h_j$ is obtained by concatenating $\overrightarrow{h}_j$ and $\overleftarrow{h}_j$ . The decoder is an RNN that predicts the target simplified sentence with a Gated Recurrent Unit (GRU) BIBREF22 . Given the previously generated target (simplified) words $y_{<t}$ , the probability of the next target word $y_t$ is $p(y_t \mid y_{<t}, x) = g(e(y_{t-1}), s_t, c_t)$ , where $g$ is a non-linear function, $e(y_{t-1})$ is the embedding of $y_{t-1}$ , and $s_t$ is the decoding state for time step $t$ . The state $s_t$ is calculated by $s_t = f(s_{t-1}, e(y_{t-1}), c_t)$ , where $f$ is the GRU activation function. The context vector $c_t$ is computed as a weighted sum of the annotations $h_j$ , $c_t = \sum_{j=1}^{T} \alpha_{tj} h_j$ , where the weight $\alpha_{tj}$ is computed by $\alpha_{tj} = \frac{\exp(e_{tj})}{\sum_{k=1}^{T} \exp(e_{tk})}$ and $e_{tj} = v_a^{\top} \tanh(W_a s_{t-1} + U_a h_j)$ , where $v_a$ , $W_a$ and $U_a$ are weight matrices. The training objective is to maximize the likelihood of the training data. Beam search is employed for decoding. Synthetic Simplified Sentences We train an auxiliary NMT system from the simplified sentence to the ordinary sentence, which is first trained on the available parallel data. To leverage simplified sentences for improving the quality of the NMT model for text simplification, we adapt the back-translation approach proposed by Sennrich et al. BIBREF11 to our scenario. More concretely, given a sentence from the simplified corpus, we use the simplified-ordinary system in translate mode with greedy decoding to translate it into an ordinary sentence, which is denoted as back-translation. This way, we obtain synthetic parallel simplified-ordinary sentence pairs. Both the synthetic sentences and the available parallel data are used as training data for the original NMT system. Evaluation We evaluate the performance of text simplification using neural machine translation on available parallel sentences and additional simplified sentences. Dataset. We use two simplification datasets (WikiSmall and WikiLarge). WikiSmall consists of ordinary and simplified sentences from the ordinary and simple English Wikipedias, and has been used as a benchmark for evaluating text simplification BIBREF17 , BIBREF18 , BIBREF8 . The training set has 89,042 sentence pairs, and the test set has 100 pairs. WikiLarge is also from the Wikipedia corpus; its training set contains 296,402 sentence pairs BIBREF19 , BIBREF20 . WikiLarge includes 8 (reference) simplifications for 2,359 sentences split into 2,000 for development and 359 for testing. Metrics. Three text simplification metrics are chosen in this paper. BLEU BIBREF5 is a traditional machine translation metric that assesses the degree to which translated simplifications differ from reference simplifications.
FKGL measures the readability of the output BIBREF23 ; a smaller FKGL represents simpler output. SARI is a recent text simplification metric that compares the output against the source and reference simplifications BIBREF20 . We also evaluate the output of all systems using human evaluation, with a metric denoted as Simplicity BIBREF8 . Three non-native fluent English speakers are shown reference sentences and output sentences. They are asked whether the output sentence is much simpler (+2), somewhat simpler (+1), equally simple (0), somewhat more difficult (-1), or much more difficult (-2) than the reference sentence. Methods. We use OpenNMT BIBREF24 as the implementation of the NMT system for all experiments BIBREF5 . We generally follow the default settings and training procedure described by Klein et al. (2017). We replace out-of-vocabulary words with a special UNK symbol. At prediction time, we replace UNK words with the source word that receives the highest probability score from the attention layer. The OpenNMT system trained on the parallel data is the baseline system. To obtain a synthetic parallel training set, we back-translate a random sample of 100K sentences from the collected simplified corpora. OpenNMT trained on the parallel data plus the synthetic data is our model. The benchmarks are run on an Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz with 32GB memory, trained on one GeForce GTX 1080 (Pascal) GPU with CUDA v8.0. We choose three statistical text simplification systems. PBMT-R is a phrase-based method with a reranking post-processing step BIBREF18 . Hybrid performs sentence splitting and deletion operations based on discourse representation structures, and then simplifies sentences with PBMT-R BIBREF25 . SBMT-SARI BIBREF19 is a syntax-based translation model that uses the PPDB paraphrase database BIBREF26 and modifies the tuning function (using SARI). We choose two neural text simplification systems. NMT is a basic attention-based encoder-decoder model trained with the OpenNMT framework using two LSTM layers, hidden states of size 500, 500 hidden units, the SGD optimizer, and a dropout rate of 0.3 BIBREF8 . Dress is an encoder-decoder model coupled with a deep reinforcement learning framework, with parameters chosen according to the original paper BIBREF20 . For the experiments with synthetic parallel data, we back-translate a random sample of 60,000 sentences from the collected simplified sentences into ordinary sentences. Our model is trained on the synthetic data and the available parallel data, denoted as NMT+synthetic. Results. Table 1 shows the results of all models on the WikiLarge dataset. We can see that our method (NMT+synthetic) obtains higher BLEU, lower FKGL and higher SARI compared with the other models, except Dress on FKGL and SBMT-SARI on SARI. This verifies that including synthetic data during training is very effective, yielding an improvement over our NMT baseline of 2.11 BLEU, 1.7 FKGL and 1.07 SARI. We also substantially outperform Dress, which previously reported the state-of-the-art result. The results of our human evaluation using Simplicity are also presented in Table 1. NMT with synthetic data is significantly better than PBMT-R, Dress, and SBMT-SARI on Simplicity, indicating that our method with simplified data is effective at creating simpler output. Results on the WikiSmall dataset are shown in Table 2. We see a substantial improvement (6.37 BLEU) over NMT from adding simplified training data with synthetic ordinary sentences.
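For reference, the FKGL readability metric reported above is the standard Flesch-Kincaid Grade Level, computed from average sentence length and average syllables per word. Below is a rough sketch; only the 0.39/11.8/15.59 constants come from the standard formula, while the tokenizer and the syllable-counting heuristic are my assumptions rather than the authors' implementation.

```python
import re

def count_syllables(word):
    """Crude heuristic: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fkgl(sentences):
    """Flesch-Kincaid Grade Level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    words = [w for s in sentences for w in re.findall(r"[A-Za-z']+", s)]
    n_sents = max(1, len(sentences))
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / n_sents) + 11.8 * (n_syllables / n_words) - 15.59

# Lower FKGL indicates simpler output, as noted above.
print(round(fkgl(["The cat sat on the mat ."]), 2))
```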
Compared with the statistical machine translation models (PBMT-R, Hybrid, SBMT-SARI), our method (NMT+synthetic) still achieves better results, though slightly worse FKGL and SARI. Similar to the results on WikiLarge, our method outperforms the other models in the human evaluation using Simplicity. In conclusion, our method produces better results compared with the baselines, which demonstrates the effectiveness of adding simplified training data. Conclusion In this paper, we propose a simple method to use simplified corpora during the training of NMT systems, with no changes to the network architecture. In experiments on two datasets, we achieve substantial gains in all tasks, and new state-of-the-art results, via back-translation of simplified sentences into ordinary sentences and treating this synthetic data as additional training data. Because we do not change the neural network architecture to integrate simplified corpora, our method can be easily applied to other Neural Text Simplification (NTS) systems. We expect that the effectiveness of our method not only varies with the quality of the NTS system used for back-translation, but also depends on the amount of available parallel and simplified corpora. In this paper, we have only utilized data from Wikipedia for simplified sentences. Many other text sources are available, and in future work the impact of not only size but also domain should be investigated. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: what are the sizes of both datasets? Only give me the answer and do not output any other words.
Answer:
[ "training set has 89,042 sentence pairs, and the test set has 100 pairs, training set contains 296,402, 2,000 for development and 359 for testing", "WikiSmall 89 142 sentence pair and WikiLarge 298 761 sentence pairs. " ]
276
3,094
single_doc_qa
3c3db886dee74509bdf0412be7d6a783
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction End-to-end speech-to-text translation (ST) has attracted much attention recently BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 given its simplicity against cascading automatic speech recognition (ASR) and machine translation (MT) systems. The lack of labeled data, however, has become a major blocker for bridging the performance gaps between end-to-end models and cascading systems. Several corpora have been developed in recent years. post2013improved introduced a 38-hour Spanish-English ST corpus by augmenting the transcripts of the Fisher and Callhome corpora with English translations. di-gangi-etal-2019-must created the largest ST corpus to date from TED talks but the language pairs involved are out of English only. beilharz2019librivoxdeen created a 110-hour German-English ST corpus from LibriVox audiobooks. godard-etal-2018-low created a Moboshi-French ST corpus as part of a rare language documentation effort. woldeyohannis provided an Amharic-English ST corpus in the tourism domain. boito2019mass created a multilingual ST corpus involving 8 languages from a multilingual speech corpus based on Bible readings BIBREF7. Previous work either involves language pairs out of English, very specific domains, very low resource languages or a limited set of language pairs. This limits the scope of study, including the latest explorations on end-to-end multilingual ST BIBREF8, BIBREF9. Our work is mostly similar and concurrent to iranzosnchez2019europarlst who created a multilingual ST corpus from the European Parliament proceedings. The corpus we introduce has larger speech durations and more translation tokens. It is diversified with multiple speakers per transcript/translation. Finally, we provide additional out-of-domain test sets. In this paper, we introduce CoVoST, a multilingual ST corpus based on Common Voice BIBREF10 for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. It includes a total 708 hours of French (Fr), German (De), Dutch (Nl), Russian (Ru), Spanish (Es), Italian (It), Turkish (Tr), Persian (Fa), Swedish (Sv), Mongolian (Mn) and Chinese (Zh) speeches, with French and German ones having the largest durations among existing public corpora. We also collect an additional evaluation corpus from Tatoeba for French, German, Dutch, Russian and Spanish, resulting in a total of 9.3 hours of speech. Both corpora are created at the sentence level and do not require additional alignments or segmentation. Using the official Common Voice train-development-test split, we also provide baseline models, including, to our knowledge, the first end-to-end many-to-one multilingual ST models. CoVoST is released under CC0 license and free to use. The Tatoeba evaluation samples are also available under friendly CC licenses. All the data can be acquired at https://github.com/facebookresearch/covost. Data Collection and Processing ::: Common Voice (CoVo) Common Voice BIBREF10 is a crowdsourcing speech recognition corpus with an open CC0 license. Contributors record voice clips by reading from a bank of donated sentences. Each voice clip was validated by at least two other users. Most of the sentences are covered by multiple speakers, with potentially different genders, age groups or accents. 
Raw CoVo data contains samples that passed validation as well as those that did not. To build CoVoST, we only use the former one and reuse the official train-development-test partition of the validated data. As of January 2020, the latest CoVo 2019-06-12 release includes 29 languages. CoVoST is currently built on that release and covers the following 11 languages: French, German, Dutch, Russian, Spanish, Italian, Turkish, Persian, Swedish, Mongolian and Chinese. Validated transcripts were sent to professional translators. Note that the translators had access to the transcripts but not the corresponding voice clips since clips would not carry additional information. Since transcripts were duplicated due to multiple speakers, we deduplicated the transcripts before sending them to translators. As a result, different voice clips of the same content (transcript) will have identical translations in CoVoST for train, development and test splits. In order to control the quality of the professional translations, we applied various sanity checks to the translations BIBREF11. 1) For German-English, French-English and Russian-English translations, we computed sentence-level BLEU BIBREF12 with the NLTK BIBREF13 implementation between the human translations and the automatic translations produced by a state-of-the-art system BIBREF14 (the French-English system was a Transformer big BIBREF15 separately trained on WMT14). We applied this method to these three language pairs only as we are confident about the quality of the corresponding systems. Translations with a score that was too low were manually inspected and sent back to the translators when needed. 2) We manually inspected examples where the source transcript was identical to the translation. 3) We measured the perplexity of the translations using a language model trained on a large amount of clean monolingual data BIBREF14. We manually inspected examples where the translation had a high perplexity and sent them back to translators accordingly. 4) We computed the ratio of English characters in the translations. We manually inspected examples with a low ratio and sent them back to translators accordingly. 5) Finally, we used VizSeq BIBREF16 to calculate similarity scores between transcripts and translations based on LASER cross-lingual sentence embeddings BIBREF17. Samples with low scores were manually inspected and sent back for translation when needed. We also sanity check the overlaps of train, development and test sets in terms of transcripts and voice clips (via MD5 file hashing), and confirm they are totally disjoint. Data Collection and Processing ::: Tatoeba (TT) Tatoeba (TT) is a community built language learning corpus having sentences aligned across multiple languages with the corresponding speech partially available. Its sentences are on average shorter than those in CoVoST (see also Table TABREF2) given the original purpose of language learning. Sentences in TT are licensed under CC BY 2.0 FR and part of the speeches are available under various CC licenses. We construct an evaluation set from TT (for French, German, Dutch, Russian and Spanish) as a complement to CoVoST development and test sets. We collect (speech, transcript, English translation) triplets for the 5 languages and do not include those whose speech has a broken URL or is not CC licensed. We further filter these samples by sentence lengths (minimum 4 words including punctuations) to reduce the portion of short sentences. 
This makes the resulting evaluation set closer to real-world scenarios and more challenging. We run the same quality checks for TT as for CoVoST but we do not find poor quality translations according to our criteria. Finally, we report the overlap between CoVo transcripts and TT sentences in Table TABREF5. We found a minimal overlap, which makes the TT evaluation set a suitable additional test set when training on CoVoST. Data Analysis ::: Basic Statistics Basic statistics for CoVoST and TT are listed in Table TABREF2 including (unique) sentence counts, speech durations, speaker demographics (partially available) as well as vocabulary and token statistics (based on Moses-tokenized sentences by sacreMoses) on both transcripts and translations. We see that CoVoST has over 327 hours of German speeches and over 171 hours of French speeches, which, to our knowledge, corresponds to the largest corpus among existing public ST corpora (the second largest is 110 hours BIBREF18 for German and 38 hours BIBREF19 for French). Moreover, CoVoST has a total of 18 hours of Dutch speeches, to our knowledge, contributing the first public Dutch ST resource. CoVoST also has around 27-hour Russian speeches, 37-hour Italian speeches and 67-hour Persian speeches, which is 1.8 times, 2.5 times and 13.3 times of the previous largest public one BIBREF7. Most of the sentences (transcripts) in CoVoST are covered by multiple speakers with potentially different accents, resulting in a rich diversity in the speeches. For example, there are over 1,000 speakers and over 10 accents in the French and German development / test sets. This enables good coverage of speech variations in both model training and evaluation. Data Analysis ::: Speaker Diversity As we can see from Table TABREF2, CoVoST is diversified with a rich set of speakers and accents. We further inspect the speaker demographics in terms of sample distributions with respect to speaker counts, accent counts and age groups, which is shown in Figure FIGREF6, FIGREF7 and FIGREF8. We observe that for 8 of the 11 languages, at least 60% of the sentences (transcripts) are covered by multiple speakers. Over 80% of the French sentences have at least 3 speakers. And for German sentences, even over 90% of them have at least 5 speakers. Similarly, we see that a large portion of sentences are spoken in multiple accents for French, German, Dutch and Spanish. Speakers of each language also spread widely across different age groups (below 20, 20s, 30s, 40s, 50s, 60s and 70s). Baseline Results We provide baselines using the official train-development-test split on the following tasks: automatic speech recognition (ASR), machine translation (MT) and speech translation (ST). Baseline Results ::: Experimental Settings ::: Data Preprocessing We convert raw MP3 audio files from CoVo and TT into mono-channel waveforms, and downsample them to 16,000 Hz. For transcripts and translations, we normalize the punctuation, we tokenize the text with sacreMoses and lowercase it. For transcripts, we further remove all punctuation markers except for apostrophes. We use character vocabularies on all the tasks, with 100% coverage of all the characters. Preliminary experimentation showed that character vocabularies provided more stable training than BPE. For MT, the vocabulary is created jointly on both transcripts and translations. We extract 80-channel log-mel filterbank features, computed with a 25ms window size and 10ms window shift using torchaudio. 
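A minimal sketch of the preprocessing just described (mono 16 kHz waveforms and 80-channel log-mel filterbank features with a 25 ms window and 10 ms shift) using torchaudio's Kaldi-compatible frontend. Applying the zero-mean/unit-variance normalization mentioned next at the utterance level is an assumption on my part, as are the audio-loading backend details for MP3 input.

```python
import torch
import torchaudio
import torchaudio.compliance.kaldi as kaldi

def extract_fbank(path):
    waveform, sample_rate = torchaudio.load(path)            # (channels, samples)
    waveform = waveform.mean(dim=0, keepdim=True)            # mix down to mono
    if sample_rate != 16000:
        waveform = torchaudio.transforms.Resample(sample_rate, 16000)(waveform)
    feats = kaldi.fbank(waveform, num_mel_bins=80,
                        frame_length=25.0, frame_shift=10.0,
                        sample_frequency=16000)               # (frames, 80) log-mel features
    # Utterance-level zero-mean / unit-variance normalization (assumed granularity).
    return (feats - feats.mean(dim=0)) / (feats.std(dim=0) + 1e-8)
```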
The features are normalized to 0 mean and 1.0 standard deviation. We remove samples having more than 3,000 frames or more than 256 characters for GPU memory efficiency (less than 25 samples are removed for all languages). Baseline Results ::: Experimental Settings ::: Model Training Our ASR and ST models follow the architecture in berard2018end, but have 3 decoder layers like that in pino2019harnessing. For MT, we use a Transformer base architecture BIBREF15, but with 3 encoder layers, 3 decoder layers and 0.3 dropout. We use a batch size of 10,000 frames for ASR and ST, and a batch size of 4,000 tokens for MT. We train all models using Fairseq BIBREF20 for up to 200,000 updates. We use SpecAugment BIBREF21 for ASR and ST to alleviate overfitting. Baseline Results ::: Experimental Settings ::: Inference and Evaluation We use a beam size of 5 for all models. We use the best checkpoint by validation loss for MT, and average the last 5 checkpoints for ASR and ST. For MT and ST, we report case-insensitive tokenized BLEU BIBREF22 using sacreBLEU BIBREF23. For ASR, we report word error rate (WER) and character error rate (CER) using VizSeq. Baseline Results ::: Automatic Speech Recognition (ASR) For simplicity, we use the same model architecture for ASR and ST, although we do not leverage ASR models to pretrain ST model encoders later. Table TABREF18 shows the word error rate (WER) and character error rate (CER) for ASR models. We see that French and German perform the best given they are the two highest resource languages in CoVoST. The other languages are relatively low resource (especially Turkish and Swedish) and the ASR models are having difficulties to learn from this data. Baseline Results ::: Machine Translation (MT) MT models take transcripts (without punctuation) as inputs and outputs translations (with punctuation). For simplicity, we do not change the text preprocessing methods for MT to correct this mismatch. Moreover, this mismatch also exists in cascading ST systems, where MT model inputs are the outputs of an ASR model. Table TABREF20 shows the BLEU scores of MT models. We notice that the results are consistent with what we see from ASR models. For example thanks to abundant training data, French has a decent BLEU score of 29.8/25.4. German doesn't perform well, because of less richness of content (transcripts). The other languages are low resource in CoVoST and it is difficult to train decent models without additional data or pre-training techniques. Baseline Results ::: Speech Translation (ST) CoVoST is a many-to-one multilingual ST corpus. While end-to-end one-to-many and many-to-many multilingual ST models have been explored very recently BIBREF8, BIBREF9, many-to-one multilingual models, to our knowledge, have not. We hence use CoVoST to examine this setting. Table TABREF22 and TABREF23 show the BLEU scores for both bilingual and multilingual end-to-end ST models trained on CoVoST. We observe that combining speeches from multiple languages is consistently bringing gains to low-resource languages (all besides French and German). This includes combinations of distant languages, such as Ru+Fr, Tr+Fr and Zh+Fr. Moreover, some combinations do bring gains to high-resource language (French) as well: Es+Fr, Tr+Fr and Mn+Fr. We simply provide the most basic many-to-one multilingual baselines here, and leave the full exploration of the best configurations to future work. 
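The inference setup above averages the last 5 checkpoints for ASR and ST. Parameter averaging is a simple element-wise mean over saved weights; here is a sketch assuming PyTorch-style checkpoint files that store the weights under a "model" key (the key name and file naming are assumptions, not the toolkit's exact layout).

```python
import torch

def average_checkpoints(paths):
    """Element-wise average of model parameters across checkpoint files."""
    avg_state = None
    for path in paths:
        state = torch.load(path, map_location="cpu")["model"]  # key name assumed
        if avg_state is None:
            avg_state = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg_state[k] += v.float()
    return {k: v / len(paths) for k, v in avg_state.items()}

# averaged = average_checkpoints([f"checkpoint{i}.pt" for i in range(196, 201)])
# model.load_state_dict(averaged)
```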
Finally, we note that for some language pairs, absolute BLEU numbers are relatively low as we restrict model training to the supervised data. We encourage the community to improve upon those baselines, for example by leveraging semi-supervised training. Baseline Results ::: Multi-Speaker Evaluation In CoVoST, a large portion of transcripts is covered by multiple speakers with different genders, accents and age groups. Besides the standard corpus-level BLEU scores, we also want to evaluate model output variance on the same content (transcript) but different speakers. We hence propose to group samples (and their sentence BLEU scores) by transcript, and then calculate the average per-group mean, $\textrm {BLEU}_{MS} = \frac{1}{|G^{\prime }|} \sum _{g \in G^{\prime }} \textrm {Mean}(g)$ , and the average coefficient of variation, $\textrm {CoefVar}_{MS} = \frac{1}{|G^{\prime }|} \sum _{g \in G^{\prime }} \frac{\textrm {StdDev}(g)}{\textrm {Mean}(g)}$ , where $G$ is the set of sentence BLEU scores grouped by transcript and $G^{\prime } = \lbrace g | g\in G, |g|>1, \textrm {Mean}(g) > 0 \rbrace $. $\textrm {BLEU}_{MS}$ provides a normalized quality score as opposed to corpus-level BLEU or an unnormalized average of sentence BLEU. $\textrm {CoefVar}_{MS}$ is a standardized measure of model stability against different speakers (the lower the better). Table TABREF24 shows the $\textrm {BLEU}_{MS}$ and $\textrm {CoefVar}_{MS}$ of our ST models on the CoVoST test set. We see that German and Persian have the worst $\textrm {CoefVar}_{MS}$ (least stable) given their rich speaker diversity in the test set and relatively small train set (see also Figure FIGREF6 and Table TABREF2). Dutch also has poor $\textrm {CoefVar}_{MS}$ because of the lack of training data. Multilingual models are consistently more stable on low-resource languages. Ru+Fr, Tr+Fr, Fa+Fr and Zh+Fr even have better $\textrm {CoefVar}_{MS}$ than all individual languages. Conclusion We introduce a multilingual speech-to-text translation corpus, CoVoST, for 11 languages into English, diversified with over 11,000 speakers and over 60 accents. We also provide baseline results, including, to our knowledge, the first end-to-end many-to-one multilingual model for spoken language translation. CoVoST is free to use with a CC0 license, and the additional Tatoeba evaluation samples are also CC-licensed. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
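The multi-speaker metrics defined in the passage above can be computed directly from (transcript, sentence-BLEU) pairs. A small sketch under those definitions; how sentence BLEU is obtained, and the use of the population standard deviation, are my assumptions.

```python
from collections import defaultdict
from statistics import mean, pstdev

def multi_speaker_metrics(samples):
    """samples: iterable of (transcript, sentence_bleu) pairs."""
    groups = defaultdict(list)
    for transcript, bleu in samples:
        groups[transcript].append(bleu)
    # Keep groups with more than one member and a non-zero mean, per the definition above.
    kept = [g for g in groups.values() if len(g) > 1 and mean(g) > 0]
    bleu_ms = mean(mean(g) for g in kept)
    coefvar_ms = mean(pstdev(g) / mean(g) for g in kept)
    return bleu_ms, coefvar_ms
```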
Question: How is the quality of the data empirically evaluated? Only give me the answer and do not output any other words.
Answer:
[ "Validated transcripts were sent to professional translators., various sanity checks to the translations, sanity check the overlaps of train, development and test sets", "computed sentence-level BLEU, We manually inspected examples where the source transcript was identical to the translation, measured the perplexity of the translations, computed the ratio of English characters in the translations, calculate similarity scores between transcripts and translations" ]
276
3,465
single_doc_qa
942cb8a0b39b41e2adc48432a71f7201
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Explanations of happenings in one's life, causal explanations, are an important topic of study in social, psychological, economic, and behavioral sciences. For example, psychologists have analyzed people's causal explanatory style BIBREF0 and found strong negative relationships with depression, passivity, and hostility, as well as positive relationships with life satisfaction, quality of life, and length of life BIBREF1 , BIBREF2 , BIBREF0 . To help understand the significance of causal explanations, consider how they are applied to measuring optimism (and its converse, pessimism) BIBREF0 . For example, in “My parser failed because I always have bugs.”, the emphasized text span is considered a causal explanation which indicates pessimistic personality – a negative event where the author believes the cause is pervasive. However, in “My parser failed because I barely worked on the code.”, the explanation would be considered a signal of optimistic personality – a negative event for which the cause is believed to be short-lived. Language-based models which can detect causal explanations from everyday social media language can be used for more than automating optimism detection. Language-based assessments would enable other large-scale downstream tasks: tracking prevailing causal beliefs (e.g., about climate change or autism), better extracting process knowledge from non-fiction (e.g., gravity causes objects to move toward one another), or detecting attribution of blame or praise in product or service reviews (“I loved this restaurant because the fish was cooked to perfection”). In this paper, we introduce causal explanation analysis and its subtasks of detecting the presence of causality (causality prediction) and identifying explanatory phrases (causal explanation identification). There are many challenges to achieving these task. First, the ungrammatical texts in social media incur poor syntactic parsing results which drastically affect the performance of discourse relation parsing pipelines . Many causal relations are implicit and do not contain any discourse markers (e.g., `because'). Further, Explicit causal relations are also more difficult in social media due to the abundance of abbreviations and variations of discourse connectives (e.g., `cuz' and `bcuz'). Prevailing approaches for social media analyses, utilizing traditional linear models or bag of words models (e.g., SVM trained with n-gram, part-of-speech (POS) tags, or lexicon-based features) alone do not seem appropriate for this task since they simply cannot segment the text into meaningful discourse units or discourse arguments such as clauses or sentences rather than random consecutive token sequences or specific word tokens. Even when the discourse units are clear, parsers may still fail to accurately identify discourse relations since the content of social media is quite different than that of newswire which is typically used for discourse parsing. In order to overcome these difficulties of discourse relation parsing in social media, we simplify and minimize the use of syntactic parsing results and capture relations between discourse arguments, and investigate the use of a recursive neural network model (RNN). 
Recent work has shown that RNNs are effective for utilizing discourse structures for their downstream tasks BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , but they have yet to be directly used for discourse relation prediction in social media. We evaluated our model by comparing it to off-the-shelf end-to-end discourse relation parsers and traditional models. We found that the SVM and random forest classifiers work better than the LSTM classifier for the causality detection, while the LSTM classifier outperforms other models for identifying causal explanation. The contributions of this work include: (1) the proposal of models for both (a) causality prediction and (b) causal explanation identification, (2) the extensive evaluation of a variety of models from social media classification models and discourse relation parsers to RNN-based application models, demonstrating that feature-based models work best for causality prediction while RNNs are superior for the more difficult task of causal explanation identification, (3) performance analysis on architectural differences of the pipeline and the classifier structures, (4) exploration of the applications of causal explanation to downstream tasks, and (5) release of a novel, anonymized causality Facebook dataset along with our causality prediction and causal explanation identification models. Related Work Identifying causal explanations in documents can be viewed as discourse relation parsing. The Penn Discourse Treebank (PDTB) BIBREF7 has a `Cause' and `Pragmatic Cause' discourse type under a general `Contingency' class and Rhetorical Structure Theory (RST) BIBREF8 has a `Relations of Cause'. In most cases, the development of discourse parsers has taken place in-domain, where researchers have used the existing annotations of discourse arguments in newswire text (e.g. Wall Street Journal) from the discourse treebank and focused on exploring different features and optimizing various types of models for predicting relations BIBREF9 , BIBREF10 , BIBREF11 . In order to further develop automated systems, researchers have proposed end-to-end discourse relation parsers, building models which are trained and evaluated on the annotated PDTB and RST Discourse Treebank (RST DT). These corpora consist of documents from Wall Street Journal (WSJ) which are much more well-organized and grammatical than social media texts BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . Only a few works have attempted to parse discourse relations for out-of-domain problems such as text categorizations on social media texts; Ji and Bhatia used models which are pretrained with RST DT for building discourse structures from movie reviews, and Son adapted the PDTB discourse relation parsing approach for capturing counterfactual conditionals from tweets BIBREF4 , BIBREF3 , BIBREF16 . These works had substantial differences to what propose in this paper. First, Ji and Bhatia used a pretrained model (not fully optimal for some parts of the given task) in their pipeline; Ji's model performed worse than the baseline on the categorization of legislative bills, which is thought to be due to legislative discourse structures differing from those of the training set (WSJ corpus). Bhatia also used a pretrained model finding that utilizing discourse relation features did not boost accuracy BIBREF4 , BIBREF3 . 
Both Bhatia and Son used manual schemes which may limit the coverage of certain types of positive samples– Bhatia used a hand-crafted schema for weighting discourse structures for the neural network model and Son manually developed seven surface forms of counterfactual thinking for the rule-based system BIBREF4 , BIBREF16 . We use social-media-specific features from pretrained models which are directly trained on tweets and we avoid any hand-crafted rules except for those included in the existing discourse argument extraction techniques. The automated systems for discourse relation parsing involve multiple subtasks from segmenting the whole text into discourse arguments to classifying discourse relations between the arguments. Past research has found that different types of models and features yield varying performance for each subtask. Some have optimized models for discourse relation classification (i.e. given a document indicating if the relation existing) without discourse argument parsing using models such as Naive-Bayes or SVMs, achieve relatively stronger accuracies but a simpler task than that associated with discourse arguments BIBREF10 , BIBREF11 , BIBREF9 . Researchers who, instead, tried to build the end-to-end parsing pipelines considered a wider range of approaches including sequence models and RNNs BIBREF12 , BIBREF15 , BIBREF14 , BIBREF17 . Particularly, when they tried to utilize the discourse structures for out-domain applications, they used RNN-based models and found that those models are advantageous for their downstream tasks BIBREF4 , BIBREF3 . In our case, for identifying causal explanations from social media using discourse structure, we build an RNN-based model for its structural effectiveness in this task (see details in section UID13 ). However, we also note that simpler models such as SVMs and logistic regression obtained the state-of-the-art performances for text categorization tasks in social media BIBREF18 , BIBREF19 , so we build relatively simple models with different properties for each stage of the full pipeline of our parser. Methods We build our model based on PDTB-style discourse relation parsing since PDTB has a relatively simpler text segmentation method; for explicit discourse relations, it finds the presence of discourse connectives within a document and extracts discourse arguments which parametrize the connective while for implicit relations, it considers all adjacent sentences as candidate discourse arguments. Dataset We created our own causal explanation dataset by collecting 3,268 random Facebook status update messages. Three well-trained annotators manually labeled whether or not each message contains the causal explanation and obtained 1,598 causality messages with substantial agreement ( $\kappa =0.61$ ). We used the majority vote for our gold standard. Then, on each causality message, annotators identified which text spans are causal explanations. For each task, we used 80% of the dataset for training our model and 10% for tuning the hyperparameters of our models. Finally, we evaluated all of our models on the remaining 10% (Table 1 and Table 2 ). For causal explanation detection task, we extracted discourse arguments using our parser and selected discourse arguments which most cover the annotated causal explanation text span as our gold standard. Model We build two types of models. First, we develop feature-based models which utilize features of the successful models in social media analysis and causal relation discourse parsing. 
Then, we build a recursive neural network model which uses distributed representation of discourse arguments as this approach can even capture latent properties of causal relations which may exist between distant discourse arguments. We specifically selected bidirectional LSTM since the model with the discourse distributional structure built in this form outperformed the traditional models in similar NLP downstream tasks BIBREF3 . As the first step of our pipeline, we use Tweebo parser BIBREF20 to extract syntactic features from messages. Then, we demarcate sentences using punctuation (`,') tag and periods. Among those sentences, we find discourse connectives defined in PDTB annotation along with a Tweet POS tag for conjunction words which can also be a discourse marker. In order to decide whether these connectives are really discourse connectives (e.g., I went home, but he stayed) as opposed to simple connections of two words (I like apple and banana) we see if verb phrases exist before and after the connective by using dependency parsing results. Although discourse connective disambiguation is a complicated task which can be much improved by syntactic features BIBREF21 , we try to minimize effects of syntactic parsing and simplify it since it is highly error-prone in social media. Finally, according to visual inspection, emojis (`E' tag) are crucial for discourse relation in social media so we take them as separate discourse arguments (e.g.,in “My test result... :(” the sad feeling is caused by the test result, but it cannot be captured by plain word tokens). We trained a linear SVM, an rbf SVM, and a random forest with N-gram, charater N-gram, and tweet POS tags, sentiment tags, average word lengths and word counts from each message as they have a pivotal role in the models for many NLP downstream tasks in social media BIBREF19 , BIBREF18 . In addition to these features, we also extracted First-Last, First3 features and Word Pairs from every adjacent pair of discourse arguments since these features were most helpful for causal relation prediction BIBREF9 . First-Last, First3 features are first and last word and first three words of two discourse arguments of the relation, and Word Pairs are the cross product of words of those discourse arguments. These two features enable our model to capture interaction between two discourse arguments. BIBREF9 reported that these two features along with verbs, modality, context, and polarity (which can be captured by N-grams, sentiment tags and POS tags in our previous features) obtained the best performance for predicting Contingency class to which causality belongs. We load the GLOVE word embedding BIBREF22 trained in Twitter for each token of extracted discourse arguments from messages. For the distributional representation of discourse arguments, we run a Word-level LSTM on the words' embeddings within each discourse argument and concatenate last hidden state vectors of forward LSTM ( $\overrightarrow{h}$ ) and backward LSTM ( $\overleftarrow{h}$ ) which is suggested by BIBREF3 ( $DA = [\overrightarrow{h};\overleftarrow{h}]$ ). Then, we feed the sequence of the vector representation of discourse arguments to the Discourse-argument-level LSTM (DA-level LSTM) to make a final prediction with log softmax function. With this structure, the model can learn the representation of interaction of tokens inside each discourse argument, then capture discourse relations across all of the discourse arguments in each message (Figure 2 ). 
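A compact PyTorch sketch of the two-level architecture just described: a word-level BiLSTM whose final forward and backward states are concatenated into one vector per discourse argument, followed by a discourse-argument-level BiLSTM and a log-softmax output. The dimensions, the dropout placement (described in the next paragraph) and the single-message interface are assumptions; this is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalityLSTM(nn.Module):
    def __init__(self, embed_dim=100, hidden_dim=100, num_classes=2, dropout=0.3):
        super().__init__()
        self.word_lstm = nn.LSTM(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        self.da_lstm = nn.LSTM(2 * hidden_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def encode_argument(self, word_embeddings):
        # word_embeddings: (num_words, embed_dim) for one discourse argument
        _, (h_n, _) = self.word_lstm(word_embeddings.unsqueeze(0))
        # h_n: (2, 1, hidden_dim); concatenate the last forward and backward states
        return torch.cat([h_n[0, 0], h_n[1, 0]], dim=-1)        # (2*hidden_dim,)

    def forward(self, discourse_arguments):
        # discourse_arguments: list of (num_words_i, embed_dim) tensors for one message
        da_vectors = torch.stack([self.encode_argument(a) for a in discourse_arguments])
        da_vectors = self.dropout(da_vectors).unsqueeze(0)       # (1, num_DAs, 2*hidden_dim)
        outputs, (h_n, _) = self.da_lstm(da_vectors)
        # Message-level causality prediction from the final DA-level states.
        # For CEI (per-argument labeling), one would instead score each step of `outputs`.
        message_repr = torch.cat([h_n[0, 0], h_n[1, 0]], dim=-1)
        return F.log_softmax(self.out(message_repr), dim=-1)
```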
In order to prevent the overfitting, we added a dropout layer between the Word-level LSTM and the DA-level LSTM layer. We also explore subsets of the full RNN architecture, specifically with one of the two LSTM layers removed. In the first model variant, we directly input all word embeddings of a whole message to a BiLSTM layer and make prediction (Word LSTM) without the help of the distributional vector representations of discourse arguments. In the second model variant, we take the average of all word embeddings of each discourse argument ( $DA_k=\frac{1}{N_k} \sum _{i=1}^{N_k}W_{i}$ ), and use them as inputs to a BiLSTM layer (DA AVG LSTM) as the average vector of embeddings were quite effective for representing the whole sequence BIBREF3 , BIBREF5 . As with the full architectures, for CP both of these variants ends with a many-to-one classification per message, while the CEI model ends with a sequence of classifications. Experiment We explored three types of models (RBF SVM, Linear SVM, and Random Forest Classifier) which have previously been shown empirically useful for the language analysis in social media. We filtered out low frequency Word Pairs features as they tend to be noisy and sparse BIBREF9 . Then, we conducted univariate feature selection to restrict all remaining features to those showing at least a small relationship with the outcome. Specifically, we keep all features passing a family-wise error rate of $\alpha = 60$ with the given outcome. After comparing the performance of the optimized version of each model, we also conducted a feature ablation test on the best model in order to see how much each feature contributes to the causality prediction. We used bidirectional LSTMs for causality classification and causal explanation identification since the discourse arguments for causal explanation can show up either before and after the effected events or results and we want our model to be optimized for both cases. However, there is a risk of overfitting due to the dataset which is relatively small for the high complexity of the model, so we added a dropout layer (p=0.3) between the Word-level LSTM and the DA-level LSTM. For tuning our model, we explore the dimensionality of word vector and LSTM hidden state vectors of discourse arguments of 25, 50, 100, and 200 as pretrained GLOVE vectors were trained in this setting. For optimization, we used Stochastic Gradient Descent (SGD) and Adam BIBREF23 with learning rates 0.01 and 0.001. We ignore missing word embeddings because our dataset is quite small for retraining new word embeddings. However, if embeddings are extracted as separate discourse arguments, we used the average of all vectors of all discourse arguments in that message. Average embeddings have performed well for representing text sequences in other tasks BIBREF5 . We first use state-of-the-art PDTB taggers for our baseline BIBREF13 , BIBREF12 for the evaluation of the causality prediction of our models ( BIBREF12 requires sentences extracted from the text as its input, so we used our parser to extract sentences from the message). Then, we compare how models work for each task and disassembled them to inspect how each part of the models can affect their final prediction performances. We conducted McNemar's test to determine whether the performance differences are statistically significant at $p < .05$ . Results We investigated various models for both causality detection and explanation identification. 
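The feature-based side of the experiments above (word and character n-grams, univariate feature selection under a family-wise error criterion, then a linear SVM) maps naturally onto a scikit-learn pipeline. This is only a sketch: the vectorizer settings and the chi-squared scoring function are my choices, the alpha value is a placeholder rather than the paper's threshold, and the lexicon, POS-tag and word-pair features are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectFwe, chi2
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.svm import LinearSVC

pipeline = Pipeline([
    ("features", FeatureUnion([
        ("word_ngrams", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
        ("char_ngrams", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ])),
    ("select", SelectFwe(score_func=chi2, alpha=0.05)),   # family-wise error filter (alpha is a placeholder)
    ("clf", LinearSVC(C=1.0)),
])

# messages, labels = load_causality_training_data()       # assumed helper
# pipeline.fit(messages, labels)
# predictions = pipeline.predict(test_messages)
```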
Based on their performances on the task, we analyzed the relationships between the types of models and the tasks, and scrutinized further for the best performing models. For performance analysis, we reported weighted F1 of classes. Causality Prediction In order to classify whether a message contains causal relation, we compared off-the-shelf PDTB parsers, linear SVM, RBF SVM, Random forest and LSTM classifiers. The off-the-shelf parsers achieved the lowest accuracies ( BIBREF12 and BIBREF13 in Table 3 ). This result can be expected since 1) these models were trained with news articles and 2) they are trained for all possible discourse relations in addition to causal relations (e.g., contrast, condition, etc). Among our suggested models, SVM and random forest classifier performed better than LSTM and, in the general trend, the more complex the models were, the worse they performed. This suggests that the models with more direct and simpler learning methods with features might classify the causality messages better than the ones more optimized for capturing distributional information or non-linear relationships of features. Table 4 shows the results of a feature ablation test to see how each feature contributes to causality classification performance of the linear SVM classifier. POS tags caused the largest drop in F1. We suspect POS tags played a unique role because discourse connectives can have various surface forms (e.g., because, cuz, bcuz, etc) but still the same POS tag `P'. Also POS tags can capture the occurrences of modal verbs, a feature previously found to be very useful for detecting similar discourse relations BIBREF9 . N-gram features caused 0.022 F1 drop while sentiment tags did not affect the model when removed. Unlike the previous work where First-Last, First3 and Word pairs tended to gain a large F1 increase for multiclass discourse relation prediction, in our case, they did not affect the prediction performance compared to other feature types such as POS tags or N-grams. Causal Explanation Identification In this task, the model identifies causal explanations given the discourse arguments of the causality message. We explored over the same models as those we used for causality (sans the output layer), and found the almost opposite trend of performances (see Table 5 ). The Linear SVM obtained lowest F1 while the LSTM model made the best identification performance. As opposed to the simple binary classification of the causality messages, in order to detect causal explanation, it is more beneficial to consider the relation across discourse arguments of the whole message and implicit distributional representation due to the implicit causal relations between two distant arguments. Architectural Variants For causality prediction, we experimented with only word tokens in the whole message without help of Word-level LSTM layer (Word LSTM), and F1 dropped by 0.064 (CP in Table 6 ). Also, when we used the average of the sequence of word embeddings of each discourse argument as an input to the DA-level LSTM and it caused F1 drop of 0.073. This suggests that the information gained from both the interaction of words in and in between discourse arguments help when the model utilizes the distributional representation of the texts. 
For causal explanation identification, in order to test how the LSTM classifier works without its capability of capturing the relations between discourse arguments, we removed DA-level LSTM layer and ran the LSTM directly on the word embedding sequence for each discourse argument for classifying whether the argument is causal explanation, and the model had 0.061 F1 drop (Word LSTM in CEI in Table 6 ). Also, when we ran DA-level LSTM on the average vectors of the word sequences of each discourse argument of messages, F1 decreased to 0.818. This follows the similar pattern observed from other types of models performances (i.e., SVMs and Random Forest classifiers) that the models with higher complexity for capturing the interaction of discourse arguments tend to identify causal explanation with the higher accuracies. For CEI task, we found that when the model ran on the sequence representation of discourse argument (DA AVG LSTM), its performance was higher than the plain sequence of word embeddings (Word LSTM). Finally, in both subtasks, when the models ran on both Word-level and DA-Level (Full LSTM), they obtained the highest performance. Complete Pipeline Evaluations thus far zeroed-in on each subtask of causal explanation analysis (i.e. CEI only focused on data already identified to contain causal explanations). Here, we seek to evaluate the complete pipeline of CP and CEI, starting from all of test data (those or without causality) and evaluating the final accuracy of CEI predictions. This is intended to evaluate CEI performance under an applied setting where one does not already know whether a document has a causal explanation. There are several approaches we could take to perform CEI starting from unannotated data. We could simply run CEI prediction by itself (CEI Only) or the pipeline of CP first and then only run CEI on documents predicted as causal (CP + CEI). Further, the CEI model could be trained only on those documents annotated causal (as was done in the previous experiments) or on all training documents including many that are not causal. Table 7 show results varying the pipeline and how CEI was trained. Though all setups performed decent ( $F1 > 0.81$ ) we see that the pipelined approach, first predicting causality (with the linear SVM) and then predicting causal explanations only for those with marked causal (CP + CEI $_{causal}$ ) yielded the strongest results. This also utilized the CEI model only trained on those annotated causal. Besides performance, an added benefit from this two step approach is that the CP step is less computational intensive of the CEI step and approximately 2/3 of documents will never need the CEI step applied. We had an inevitable limitation on the size of our dataset, since there is no other causality dataset over social media and the annotation required an intensive iterative process. This might have limited performances of more complex models, but considering the processing time and the computation load, the combination of the linear model and the RNN-based model of our pipeline obtained both the high performance and efficiency for the practical applications to downstream tasks. In other words, it's possible the linear model will not perform as well if the training size is increased substantially. However, a linear model could still be used to do a first-pass, computationally efficient labeling, in order to shortlist social media posts for further labeling from an LSTM or more complex model. 
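The two-step pipeline just described (a cheap causality classifier first, with the LSTM-based explanation identifier applied only to messages predicted causal) reduces to a short control-flow wrapper. The model objects, their method names and the discourse-argument extractor below are assumed interfaces, not the authors' API.

```python
def causal_explanations(message, cp_model, cei_model, extract_discourse_arguments):
    """Return the discourse arguments identified as causal explanations, or []."""
    if not cp_model.predict_causality(message):          # cheap first-pass filter
        return []                                        # roughly 2/3 of messages stop here
    arguments = extract_discourse_arguments(message)     # parsing step, assumed
    labels = cei_model.predict_explanations(arguments)   # one label per argument
    return [arg for arg, is_expl in zip(arguments, labels) if is_expl]
```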
Exploration Here, we explore the use of causal explanation analysis for downstream tasks. First we look at the relationship between use of causal explanation and one's demographics: age and gender. Then, we consider their use in sentiment analysis for extracting the causes of polarity ratings. Research involving human subjects was approved by the University of Pennsylvania Institutional Review Board. Conclusion We developed a pipeline for causal explanation analysis over social media text, including both causality prediction and causal explanation identification. We examined a variety of model types and RNN architectures for each part of the pipeline, finding an SVM best for causality prediction and a hierarchy of BiLSTMs for causal explanation identification, suggesting the later task relies more heavily on sequential information. In fact, we found replacing either layer of the hierarchical LSTM architecture (the word-level or the DA-level) with a an equivalent “bag of features” approach resulted in reduced accuracy. Results of our whole pipeline of causal explanation analysis were found quite strong, achieving an $F1=0.868$ at identifying discourse arguments that are causal explanations. Finally, we demonstrated use of our models in applications, finding associations between demographics and rate of mentioning causal explanations, as well as showing differences in the top words predictive of negative ratings in Yelp reviews. Utilization of discourse structure in social media analysis has been a largely untapped area of exploration, perhaps due to its perceived difficulty. We hope the strong results of causal explanation identification here leads to the integration of more syntax and deeper semantics into social media analyses and ultimately enables new applications beyond the current state of the art. Acknowledgments This work was supported, in part, by a grant from the Templeton Religion Trust (ID #TRT0048). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. We also thank Laura Smith, Yiyi Chen, Greta Jawel and Vanessa Hernandez for their work in identifying causal explanations. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What types of social media did they consider? Only give me the answer and do not output any other words.
Answer:
[ "Facebook status update messages", "Facebook status update messages" ]
276
5,117
single_doc_qa
a2d6542c4b554a61be96919110bfd631
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Privacy policies are the documents which disclose the ways in which a company gathers, uses, shares and manages a user's data. As legal documents, they function using the principle of notice and choice BIBREF0, where companies post their policies, and theoretically, users read the policies and decide to use a company's products or services only if they find the conditions outlined in its privacy policy acceptable. Many legal jurisdictions around the world accept this framework, including the United States and the European Union BIBREF1, BIBREF2. However, the legitimacy of this framework depends upon users actually reading and understanding privacy policies to determine whether company practices are acceptable to them BIBREF3. In practice this is seldom the case BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10. This is further complicated by the highly individual and nuanced compromises that users are willing to make with their data BIBREF11, discouraging a `one-size-fits-all' approach to notice of data practices in privacy documents. With devices constantly monitoring our environment, including our personal space and our bodies, lack of awareness of how our data is being used easily leads to problematic situations where users are outraged by information misuse, but companies insist that users have consented. The discovery of increasingly egregious uses of data by companies, such as the scandals involving Facebook and Cambridge Analytica BIBREF12, have further brought public attention to the privacy concerns of the internet and ubiquitous computing. This makes privacy a well-motivated application domain for NLP researchers, where advances in enabling users to quickly identify the privacy issues most salient to them can potentially have large real-world impact. [1]https://play.google.com/store/apps/details?id=com.gotokeep.keep.intl [2]https://play.google.com/store/apps/details?id=com.viber.voip [3]A question might not have any supporting evidence for an answer within the privacy policy. Motivated by this need, we contribute PrivacyQA, a corpus consisting of 1750 questions about the contents of privacy policies, paired with over 3500 expert annotations. The goal of this effort is to kickstart the development of question-answering methods for this domain, to address the (unrealistic) expectation that a large population should be reading many policies per day. In doing so, we identify several understudied challenges to our ability to answer these questions, with broad implications for systems seeking to serve users' information-seeking intent. By releasing this resource, we hope to provide an impetus to develop systems capable of language understanding in this increasingly important domain. Related Work Prior work has aimed to make privacy policies easier to understand. Prescriptive approaches towards communicating privacy information BIBREF21, BIBREF22, BIBREF23 have not been widely adopted by industry. Recently, there have been significant research effort devoted to understanding privacy policies by leveraging NLP techniques BIBREF24, BIBREF25, BIBREF26, BIBREF27, BIBREF28, especially by identifying specific data practices within a privacy policy. 
We adopt a personalized approach to understanding privacy policies, that allows users to query a document and selectively explore content salient to them. Most similar is the PolisisQA corpus BIBREF29, which examines questions users ask corporations on Twitter. Our approach differs in several ways: 1) The PrivacyQA dataset is larger, containing 10x as many questions and answers. 2) Answers are formulated by domain experts with legal training. 3) PrivacyQA includes diverse question types, including unanswerable and subjective questions. Our work is also related to reading comprehension in the open domain, which is frequently based upon Wikipedia passages BIBREF16, BIBREF17, BIBREF15, BIBREF30 and news articles BIBREF20, BIBREF31, BIBREF32. Table.TABREF4 presents the desirable attributes our dataset shares with past approaches. This work is also tied into research in applying NLP approaches to legal documents BIBREF33, BIBREF34, BIBREF35, BIBREF36, BIBREF37, BIBREF38, BIBREF39. While privacy policies have legal implications, their intended audience consists of the general public rather than individuals with legal expertise. This arrangement is problematic because the entities that write privacy policies often have different goals than the audience. feng2015applying, tan-EtAl:2016:P16-1 examine question answering in the insurance domain, another specialized domain similar to privacy, where the intended audience is the general public. Data Collection We describe the data collection methodology used to construct PrivacyQA. With the goal of achieving broad coverage across application types, we collect privacy policies from 35 mobile applications representing a number of different categories in the Google Play Store. One of our goals is to include both policies from well-known applications, which are likely to have carefully-constructed privacy policies, and lesser-known applications with smaller install bases, whose policies might be considerably less sophisticated. Thus, setting 5 million installs as a threshold, we ensure each category includes applications with installs on both sides of this threshold. All policies included in the corpus are in English, and were collected before April 1, 2018, predating many companies' GDPR-focused BIBREF41 updates. We leave it to future studies BIBREF42 to look at the impact of the GDPR (e.g., to what extent GDPR requirements contribute to making it possible to provide users with more informative answers, and to what extent their disclosures continue to omit issues that matter to users). Data Collection ::: Crowdsourced Question Elicitation The intended audience for privacy policies consists of the general public. This informs the decision to elicit questions from crowdworkers on the contents of privacy policies. We choose not to show the contents of privacy policies to crowdworkers, a procedure motivated by a desire to avoid inadvertent biases BIBREF43, BIBREF44, BIBREF45, BIBREF46, BIBREF47, and encourage crowdworkers to ask a variety of questions beyond only asking questions based on practices described in the document. Instead, crowdworkers are presented with public information about a mobile application available on the Google Play Store including its name, description and navigable screenshots. Figure FIGREF9 shows an example of our user interface. Crowdworkers are asked to imagine they have access to a trusted third-party privacy assistant, to whom they can ask any privacy question about a given mobile application. 
We use the Amazon Mechanical Turk platform and recruit crowdworkers who have been conferred “master” status and are located within the United States of America. Turkers are asked to provide five questions per mobile application, and are paid $2 per assignment, taking ~eight minutes to complete the task. Data Collection ::: Answer Selection To identify legally sound answers, we recruit seven experts with legal training to construct answers to Turker questions. Experts identify relevant evidence within the privacy policy, as well as provide meta-annotation on the question's relevance, subjectivity, OPP-115 category BIBREF49, and how likely any privacy policy is to contain the answer to the question asked. Data Collection ::: Analysis Table.TABREF17 presents aggregate statistics of the PrivacyQA dataset. 1750 questions are posed to our imaginary privacy assistant over 35 mobile applications and their associated privacy documents. As an initial step, we formulate the problem of answering user questions as an extractive sentence selection task, ignoring for now background knowledge, statistical data and legal expertise that could otherwise be brought to bear. The dataset is partitioned into a training set featuring 27 mobile applications and 1350 questions, and a test set consisting of 400 questions over 8 policy documents. This ensures that documents in training and test splits are mutually exclusive. Every question is answered by at least one expert. In addition, in order to estimate annotation reliability and provide for better evaluation, every question in the test set is answered by at least two additional experts. Table TABREF14 describes the distribution over first words of questions posed by crowdworkers. We also observe low redundancy in the questions posed by crowdworkers over each policy, with each policy receiving ~49.94 unique questions despite crowdworkers independently posing questions. Questions are on average 8.4 words long. As declining to answer a question can be a legally sound response but is seldom practically useful, answers to questions where a minority of experts abstain to answer are filtered from the dataset. Privacy policies are ~3000 words long on average. The answers to the question asked by the users typically have ~100 words of evidence in the privacy policy document. Data Collection ::: Analysis ::: Categories of Questions Questions are organized under nine categories from the OPP-115 Corpus annotation scheme BIBREF49: First Party Collection/Use: What, why and how information is collected by the service provider Third Party Sharing/Collection: What, why and how information shared with or collected by third parties Data Security: Protection measures for user information Data Retention: How long user information will be stored User Choice/Control: Control options available to users User Access, Edit and Deletion: If/how users can access, edit or delete information Policy Change: Informing users if policy information has been changed International and Specific Audiences: Practices pertaining to a specific group of users Other: General text, contact information or practices not covered by other categories. For each question, domain experts indicate one or more relevant OPP-115 categories. We mark a category as relevant to a question if it is identified as such by at least two annotators. If no such category exists, the category is marked as `Other' if atleast one annotator has identified the `Other' category to be relevant. 
If neither of these conditions is satisfied, we label the question as having no agreement. The distribution of questions in the corpus across OPP-115 categories is as shown in Table.TABREF16. First party and third party related questions are the largest categories, forming nearly 66.4% of all questions asked to the privacy assistant. Data Collection ::: Analysis ::: Answer Validation When do experts disagree? We would like to analyze the reasons for potential disagreement on the annotation task, to ensure disagreements arise due to valid differences in opinion rather than lack of adequate specification in annotation guidelines. It is important to note that the annotators are experts rather than crowdworkers. Accordingly, their judgements can be considered valid, legally-informed opinions even when their perspectives differ. For the sake of this question we randomly sample 100 instances in the test data and analyze them for likely reasons for disagreements. We consider a disagreement to have occurred when more than one expert does not agree with the majority consensus. By disagreement we mean there is no overlap between the text identified as relevant by one expert and another. We find that the annotators agree on the answer for 74% of the questions, even if the supporting evidence they identify is not identical i.e full overlap. They disagree on the remaining 26%. Sources of apparent disagreement correspond to situations when different experts: have differing interpretations of question intent (11%) (for example, when a user asks 'who can contact me through the app', the questions admits multiple interpretations, including seeking information about the features of the app, asking about first party collection/use of data or asking about third party collection/use of data), identify different sources of evidence for questions that ask if a practice is performed or not (4%), have differing interpretations of policy content (3%), identify a partial answer to a question in the privacy policy (2%) (for example, when the user asks `who is allowed to use the app' a majority of our annotators decline to answer, but the remaining annotators highlight partial evidence in the privacy policy which states that children under the age of 13 are not allowed to use the app), and other legitimate sources of disagreement (6%) which include personal subjective views of the annotators (for example, when the user asks `is my DNA information used in any way other than what is specified', some experts consider the boilerplate text of the privacy policy which states that it abides to practices described in the policy document as sufficient evidence to answer this question, whereas others do not). Experimental Setup We evaluate the ability of machine learning methods to identify relevant evidence for questions in the privacy domain. We establish baselines for the subtask of deciding on the answerability (§SECREF33) of a question, as well as the overall task of identifying evidence for questions from policies (§SECREF37). We describe aspects of the question that can render it unanswerable within the privacy domain (§SECREF41). Experimental Setup ::: Answerability Identification Baselines We define answerability identification as a binary classification task, evaluating model ability to predict if a question can be answered, given a question in isolation. This can serve as a prior for downstream question-answering. 
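Returning to the OPP-115 category-assignment rule above (a category is kept if at least two annotators mark it; otherwise the question falls back to `Other' if any annotator chose `Other', and to no agreement otherwise), a minimal Python sketch of that aggregation is given below. The function name and the per-annotator data layout are illustrative assumptions, not part of the released corpus or its tooling.

from collections import Counter

def aggregate_opp115_labels(annotator_categories):
    # annotator_categories: one set of OPP-115 category strings per expert annotator.
    counts = Counter(cat for cats in annotator_categories for cat in set(cats))
    relevant = sorted(cat for cat, votes in counts.items() if votes >= 2)
    if relevant:
        return relevant
    if counts.get("Other", 0) >= 1:
        return ["Other"]
    return []  # no agreement

# Example: no category reaches two votes, but one annotator chose "Other".
print(aggregate_opp115_labels([
    {"Data Security"},
    {"Other"},
    {"First Party Collection/Use"},
]))  # -> ['Other']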
We describe three baselines on the answerability task, and find they considerably improve performance over a majority-class baseline. SVM: We define 3 sets of features to characterize each question. The first is a simple bag-of-words set of features over the question (SVM-BOW), the second is bag-of-words features of the question as well as length of the question in words (SVM-BOW + LEN), and lastly we extract bag-of-words features, length of the question in words as well as part-of-speech tags for the question (SVM-BOW + LEN + POS). This results in vectors of 200, 201 and 228 dimensions respectively, which are provided to an SVM with a linear kernel. CNN: We utilize a CNN neural encoder for answerability prediction. We use GloVe word embeddings BIBREF50, and a filter size of 5 with 64 filters to encode questions. BERT: BERT BIBREF51 is a bidirectional transformer-based language-model BIBREF52. We fine-tune BERT-base on our binary answerability identification task with a learning rate of 2e-5 for 3 epochs, with a maximum sequence length of 128. Experimental Setup ::: Privacy Question Answering Our goal is to identify evidence within a privacy policy for questions asked by a user. This is framed as an answer sentence selection task, where models identify a set of evidence sentences from all candidate sentences in each policy. Experimental Setup ::: Privacy Question Answering ::: Evaluation Metric Our evaluation metric for answer-sentence selection is sentence-level F1, implemented similar to BIBREF30, BIBREF16. Precision and recall are implemented by measuring the overlap between predicted sentences and sets of gold-reference sentences. We report the average of the maximum F1 from each n$-$1 subset, in relation to the heldout reference. Experimental Setup ::: Privacy Question Answering ::: Baselines We describe baselines on this task, including a human performance baseline. No-Answer Baseline (NA) : Most of the questions we receive are difficult to answer in a legally-sound way on the basis of information present in the privacy policy. We establish a simple baseline to quantify the effect of identifying every question as unanswerable. Word Count Baseline : To quantify the effect of using simple lexical matching to answer the questions, we retrieve the top candidate policy sentences for each question using a word count baseline BIBREF53, which counts the number of question words that also appear in a sentence. We include the top 2, 3 and 5 candidates as baselines. BERT: We implement two BERT-based baselines BIBREF51 for evidence identification. First, we train BERT on each query-policy sentence pair as a binary classification task to identify if the sentence is evidence for the question or not (Bert). We also experiment with a two-stage classifier, where we separately train the model on questions only to predict answerability. At inference time, if the answerable classifier predicts the question is answerable, the evidence identification classifier produces a set of candidate sentences (Bert + Unanswerable). Human Performance: We pick each reference answer provided by an annotator, and compute the F1 with respect to the remaining references, as described in section 4.2.1. Each reference answer is treated as the prediction, and the remaining n-1 answers are treated as the gold reference. The average of the maximum F1 across all reference answers is computed as the human baseline. 
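As a concrete reading of the evaluation metric and human baseline described above, the sketch below computes sentence-level F1 over sets of selected policy-sentence indices, together with the leave-one-out human score. It follows the textual description rather than the authors' released evaluation code, and the handling of empty answer sets is an assumption.

def sentence_f1(pred, gold):
    # F1 between a predicted set of sentence ids and one gold reference set.
    pred, gold = set(pred), set(gold)
    if not pred and not gold:
        return 1.0          # assumed convention when both verdicts are "unanswerable"
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def best_f1(pred, references):
    # Score a prediction against several expert references, keeping the best match.
    return max(sentence_f1(pred, ref) for ref in references)

def human_baseline(references):
    # Leave-one-out: each reference acts as the prediction against the remaining ones.
    scores = [best_f1(ref, references[:i] + references[i + 1:])
              for i, ref in enumerate(references)]
    return sum(scores) / len(scores)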
Results and Discussion The results of the answerability baselines are presented in Table TABREF31, and on answer sentence selection in Table TABREF32. We observe that Bert exhibits the best performance on a binary answerability identification task. However, most baselines considerably exceed the performance of a majority-class baseline. This suggests considerable information in the question, indicating its possible answerability within this domain. Table.TABREF32 describes the performance of our baselines on the answer sentence selection task. The No-answer (NA) baseline performs at 28 F1, providing a lower bound on performance at this task. We observe that our best-performing baseline, Bert + Unanswerable, achieves an F1 of 39.8. This suggests that Bert is capable of making some progress towards answering questions in this difficult domain, while still leaving considerable headroom for improvement to reach human performance. Bert + Unanswerable performance suggests that incorporating information about answerability can help in this difficult domain. We examine this challenging phenomenon of unanswerability further in Section . Results and Discussion ::: Error Analysis Disagreements are analyzed based on the OPP-115 categories of each question (Table.TABREF34). We compare our best-performing BERT variant against the NA model and human performance. We observe significant room for improvement across all categories of questions, but especially for the first party, third party and data retention categories. We analyze the performance of our strongest BERT variant to identify classes of errors and directions for future improvement (Table.8). We observe that a majority of answerability mistakes made by the BERT model are questions which are in fact answerable, but are identified as unanswerable by BERT. We observe that BERT makes 124 such mistakes on the test set. We collect expert judgments on relevance, subjectivity, silence, and how likely the question is to be answered from the privacy policy. We find that most of these mistakes are relevant questions. However, many of them were identified as subjective by the annotators, and at least one annotator marked 19 of these questions as having no answer within the privacy policy. However, only 6 of these questions were unexpected or do not usually have an answer in privacy policies. These findings suggest that a more nuanced understanding of answerability might help improve model performance in this challenging domain. Results and Discussion ::: What makes Questions Unanswerable? We further ask legal experts to identify potential causes of unanswerability of questions. This analysis has considerable implications. While past work BIBREF17 has treated unanswerable questions as homogeneous, a question answering system might wish to have different treatments for different categories of `unanswerable' questions. The following factors were identified to play a role in unanswerability: Incomprehensibility: If a question is incomprehensible to the extent that its meaning is not intelligible. Relevance: Is this question in the scope of what could be answered by reading the privacy policy. Ill-formedness: Is this question ambiguous or vague. An ambiguous statement will typically contain expressions that can refer to multiple potential explanations, whereas a vague statement carries a concept with an unclear or soft definition. Silence: Other policies answer this type of question but this one does not. 
Atypicality: The question is of a nature such that it is unlikely for any privacy policy to have an answer to the question. Our experts attempt to identify the different `unanswerable' factors for all 573 such questions in the corpus. 4.18% of the questions were identified as being incomprehensible (for example, `any difficulties to occupy the privacy assistant'). Amongst the comprehensible questions, 50% were identified as likely to have an answer within the privacy policy, 33.1% were identified as being privacy-related questions but not within the scope of a privacy policy (e.g., `has Viber had any privacy breaches in the past?') and 16.9% of questions were identified as completely out-of-scope (e.g., `will the app consume much space?'). Of the questions identified as relevant, 32% were ill-formed questions that were phrased by the user in a manner considered vague or ambiguous. Of the questions that were both relevant as well as `well-formed', 95.7% of the questions were not answered by the policy in question, but it was reasonable to expect that a privacy policy would contain an answer. The remaining 4.3% were described as reasonable questions, but of a nature generally not discussed in privacy policies. This suggests that the answerability of questions over privacy policies is a complex issue, and future systems should consider each of these factors when serving users' information-seeking intent. We examine a large-scale dataset of “natural” unanswerable questions BIBREF54 based on real user search engine queries to identify if similar unanswerability factors exist. It is important to note that these questions have previously been filtered, according to criteria for bad questions defined as “(questions that are) ambiguous, incomprehensible, dependent on clear false presuppositions, opinion-seeking, or not clearly a request for factual information.” Annotators made the decision based on the content of the question without viewing the equivalent Wikipedia page. We randomly sample 100 questions from the development set which were identified as unanswerable, and find that 20% of the questions are not questions (e.g., “all I want for christmas is you mariah carey tour”). 12% of questions are unlikely to ever contain an answer on Wikipedia, corresponding closely to our atypicality category. 3% of questions are unlikely to have an answer anywhere (e.g., `what guides Santa home after he has delivered presents?'). 7% of questions are incomplete or open-ended (e.g., `the south west wind blows across nigeria between'). 3% of questions have an unresolvable coreference (e.g., `how do i get to Warsaw Missouri from here'). 4% of questions are vague, and a further 7% have unknown sources of error. 2% still contain false presuppositions (e.g., `what is the only fruit that does not have seeds?') and the remaining 42% do not have an answer within the document. This reinforces our belief that, though they have been understudied in past work, any question answering system interacting with real users should expect to receive such unanticipated and unanswerable questions. Conclusion We present PrivacyQA, the first significant corpus of privacy policy questions and more than 3500 expert annotations of relevant answers. The goal of this work is to promote question-answering research in the specialized privacy domain, where it can have large real-world impact. Strong neural baselines on PrivacyQA achieve a performance of only 39.8 F1 on this corpus, indicating considerable room for future research. 
Further, we shed light on several important considerations that affect the answerability of questions. We hope this contribution leads to multidisciplinary efforts to precisely understand user intent and reconcile it with information in policy documents, from both the privacy and NLP communities. Acknowledgements This research was supported in part by grants from the National Science Foundation Secure and Trustworthy Computing program (CNS-1330596, CNS-1330214, CNS-15-13957, CNS-1801316, CNS-1914486, CNS-1914444) and a DARPA Brandeis grant on Personalized Privacy Assistants (FA8750-15-2-0277). The US Government is authorized to reproduce and distribute reprints for Governmental purposes not withstanding any copyright notation. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the NSF, DARPA, or the US Government. The authors would like to extend their gratitude to Elias Wright, Gian Mascioli, Kiara Pillay, Harrison Kay, Eliel Talo, Alexander Fagella and N. Cameron Russell for providing their valuable expertise and insight to this effort. The authors are also grateful to Eduard Hovy, Lorrie Cranor, Florian Schaub, Joel Reidenberg, Aditya Potukuchi and Igor Shalyminov for helpful discussions related to this work, and to the three anonymous reviewers of this draft for their constructive feedback. Finally, the authors would like to thank all crowdworkers who consented to participate in this study. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Who were the experts used for annotation? Only give me the answer and do not output any other words.
Answer:
[ "Individuals with legal training", "Yes" ]
276
5,104
single_doc_qa
a03c6089f8aa44c896983ca4e0ce539d
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Word embeddings are representations of words in numerical form, as vectors of typically several hundred dimensions. The vectors are used as an input to machine learning models; for complex language processing tasks these are typically deep neural networks. The embedding vectors are obtained from specialized learning tasks, based on neural networks, e.g., word2vec BIBREF0, GloVe BIBREF1, FastText BIBREF2, ELMo BIBREF3, and BERT BIBREF4. For training, the embeddings algorithms use large monolingual corpora that encode important information about word meaning as distances between vectors. In order to enable downstream machine learning on text understanding tasks, the embeddings shall preserve semantic relations between words, and this is true even across languages. Probably the best known word embeddings are produced by the word2vec method BIBREF5. The problem with word2vec embeddings is their failure to express polysemous words. During training of an embedding, all senses of a given word (e.g., paper as a material, as a newspaper, as a scientific work, and as an exam) contribute relevant information in proportion to their frequency in the training corpus. This causes the final vector to be placed somewhere in the weighted middle of all words' meanings. Consequently, rare meanings of words are poorly expressed with word2vec and the resulting vectors do not offer good semantic representations. For example, none of the 50 closest vectors of the word paper is related to science. The idea of contextual embeddings is to generate a different vector for each context a word appears in and the context is typically defined sentence-wise. To a large extent, this solves the problems with word polysemy, i.e. the context of a sentence is typically enough to disambiguate different meanings of a word for humans and so it is for the learning algorithms. In this work, we describe high-quality models for contextual embeddings, called ELMo BIBREF3, precomputed for seven morphologically rich, less-resourced languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian, and Swedish. ELMo is one of the most successful approaches to contextual word embeddings. At time of its creation, ELMo has been shown to outperform previous word embeddings BIBREF3 like word2vec and GloVe on many NLP tasks, e.g., question answering, named entity extraction, sentiment analysis, textual entailment, semantic role labeling, and coreference resolution. This report is split into further five sections. In section SECREF2, we describe the contextual embeddings ELMo. In Section SECREF3, we describe the datasets used and in Section SECREF4 we describe preprocessing and training of the embeddings. We describe the methodology for evaluation of created vectors and results in Section SECREF5. We present conclusion in Section SECREF6 where we also outline plans for further work. ELMo Typical word embeddings models or representations, such as word2vec BIBREF0, GloVe BIBREF1, or FastText BIBREF2, are fast to train and have been pre-trained for a number of different languages. They do not capture the context, though, so each word is always given the same vector, regardless of its context or meaning. This is especially problematic for polysemous words. 
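The polysemy problem described above is easy to reproduce with any pretrained static embedding, for instance via the gensim library; the model path below is a placeholder and the exact neighbour list naturally depends on which vectors are loaded.

from gensim.models import KeyedVectors

# Placeholder path to any word2vec-format model (e.g. a large English skip-gram model).
kv = KeyedVectors.load_word2vec_format("word2vec-vectors.bin", binary=True)

# One vector per surface form, independent of context: the neighbours of
# "paper" mix its senses, and science-related terms may not appear at all.
for word, similarity in kv.most_similar("paper", topn=50):
    print(f"{similarity:.3f}\t{word}")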
ELMo (Embeddings from Language Models) embedding BIBREF3 is one of the state-of-the-art pretrained transfer learning models, that remedies the problem and introduces a contextual component. ELMo model`s architecture consists of three neural network layers. The output of the model after each layer gives one set of embeddings, altogether three sets. The first layer is a CNN layer, which operates on a character level. It is context independent, so each word always gets the same embedding, regardless of its context. It is followed by two biLM layers. A biLM layer consists of two concatenated LSTMs. In the first LSTM, we try to predict the following word, based on the given past words, where each word is represented by the embeddings from the CNN layer. In the second LSTM, we try to predict the preceding word, based on the given following words. It is equivalent to the first LSTM, just reading the text in reverse. In NLP tasks, any set of these embeddings may be used; however, a weighted average is usually used. The weights of the average are learned during the training of the model for the specific task. Additionally, an entire ELMo model can be fine-tuned on a specific end task. Although ELMo is trained on character level and is able to handle out-of-vocabulary words, a vocabulary file containing most common tokens is used for efficiency during training and embedding generation. The original ELMo model was trained on a one billion word large English corpus, with a given vocabulary file of about 800,000 words. Later, ELMo models for other languages were trained as well, but limited to larger languages with many resources, like German and Japanese. ELMo ::: ELMoForManyLangs Recently, ELMoForManyLangs BIBREF6 project released pre-trained ELMo models for a number of different languages BIBREF7. These models, however, were trained on a significantly smaller datasets. They used 20-million-words data randomly sampled from the raw text released by the CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, which is a combination of Wikipedia dump and common crawl. The quality of these models is questionable. For example, we compared the Latvian model by ELMoForManyLangs with a model we trained on a complete (wikidump + common crawl) Latvian corpus, which has about 280 million tokens. The difference of each model on the word analogy task is shown in Figure FIGREF16 in Section SECREF5. As the results of the ELMoForManyLangs embeddings are significantly worse than using the full corpus, we can conclude that these embeddings are not of sufficient quality. For that reason, we computed ELMo embeddings for seven languages on much larger corpora. As this effort requires access to large amount of textual data and considerable computational resources, we made the precomputed models publicly available by depositing them to Clarin repository. Training Data We trained ELMo models for seven languages: Slovenian, Croatian, Finnish, Estonian, Latvian, Lithuanian and Swedish. To obtain high-quality embeddings, we used large monolingual corpora from various sources for each language. Some corpora are available online under permissive licences, others are available only for research purposes or have limited availability. The corpora used in training datasets are a mix of news articles and general web crawl, which we preprocessed and deduplicated. Below we shortly describe the used corpora in alphabetical order of the involved languages. 
Their names and sizes are summarized in Table TABREF3. Croatian dataset include hrWaC 2.1 corpus BIBREF9, Riznica BIBREF10, and articles of Croatian branch of Styria media house, made available to us through partnership in a joint project. hrWaC was built by crawling the .hr internet domain in 2011 and 2014. Riznica is composed of Croatian fiction and non-fiction prose, poetry, drama, textbooks, manuals, etc. The Styria dataset consists of 570,219 news articles published on the Croatian 24sata news portal and niche portals related to 24sata. Estonian dataset contains texts from two sources, CoNLL 2017 Shared Task - Automatically Annotated Raw Texts and Word Embeddings BIBREF8, and news articles made available to us by Ekspress Meedia due to partnership in the project. Ekspress Meedia dataset is composed of Estonian news articles between years 2009 and 2019. The CoNLL 2017 corpus is composed of Estonian Wikipedia and webcrawl. Finnish dataset contains articles by Finnish news agency STT, Finnish part of the CoNLL 2017 dataset, and Ylilauta downloadable version BIBREF11. STT news articles were published between years 1992 and 2018. Ylilauta is a Finnish online discussion board; the corpus contains parts of the discussions from 2012 to 2014. Latvian dataset consists only of the Latvian portion of the ConLL 2017 corpus. Lithuanian dataset is composed of Lithuanian Wikipedia articles from 2018, DGT-UD corpus, and LtTenTen. DGT-UD is a parallel corpus of 23 official languages of the EU, composed of JRC DGT translation memory of European law, automatically annotated with UD-Pipe 1.2. LtTenTen is Lithuanian web corpus made up of texts collected from the internet in April 2014 BIBREF12. Slovene dataset is formed from the Gigafida 2.0 corpus BIBREF13. It is a general language corpus composed of various sources, mostly newspapers, internet pages, and magazines, but also fiction and non-fiction prose, textbooks, etc. Swedish dataset is composed of STT Swedish articles and Swedish part of CoNLL 2017. The Finnish news agency STT publishes some of its articles in Swedish language. They were made available to us through partnership in a joint project. The corpus contains those articles from 1992 to 2017. Preprocessing and Training Prior to training the ELMo models, we sentence and word tokenized all the datasets. The text was formatted in such a way that each sentence was in its own line with tokens separated by white spaces. CoNLL 2017, DGT-UD and LtTenTen14 corpora were already pre-tokenized. We tokenized the others using the NLTK library and its tokenizers for each of the languages. There is no tokenizer for Croatian in NLTK library, so we used Slovene tokenizer instead. After tokenization, we deduplicated the datasets for each language separately, using the Onion (ONe Instance ONly) tool for text deduplication. We applied the tool on paragraph level for corpora that did not have sentences shuffled and on sentence level for the rest. We considered 9-grams with duplicate content threshold of 0.9. For each language we prepared a vocabulary file, containing roughly one million most common tokens, i.e. tokens that appear at least $n$ times in the corpus, where $n$ is between 15 and 25, depending on the dataset size. We included the punctuation marks among the tokens. We trained each ELMo model using default values used to train the original English ELMo (large) model. 
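As an illustration of the vocabulary step above, a minimal sketch is shown below. It assumes one pre-tokenized sentence per line with whitespace-separated tokens; the file names, the threshold value and the special-symbol header (a convention of the original bilm training code) are assumptions rather than the exact scripts used here.

from collections import Counter

def build_vocab(corpus_path, vocab_path, min_count=20):
    # Keep tokens (including punctuation marks) seen at least min_count times.
    counts = Counter()
    with open(corpus_path, encoding="utf-8") as corpus:
        for line in corpus:
            counts.update(line.split())
    tokens = ["<S>", "</S>", "<UNK>"]  # assumed header expected by the ELMo trainer
    tokens += [tok for tok, n in counts.most_common() if n >= min_count]
    with open(vocab_path, "w", encoding="utf-8") as out:
        out.write("\n".join(tokens))
    return len(tokens)

# Example call with placeholder paths:
# build_vocab("corpus.tokenized.txt", "vocab.txt", min_count=20)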
Evaluation We evaluated the produced ELMo models for all languages using two evaluation tasks: a word analogy task and named entity recognition (NER) task. Below, we first shortly describe each task, followed by the evaluation results. Evaluation ::: Word Analogy Task The word analogy task was popularized by mikolov2013distributed. The goal is to find a term $y$ for a given term $x$ so that the relationship between $x$ and $y$ best resembles the given relationship $a : b$. There are two main groups of categories: 5 semantic and 10 syntactic. To illustrate a semantic relationship, consider for example that the word pair $a : b$ is given as “Finland : Helsinki”. The task is to find the term $y$ corresponding to the relationship “Sweden : $y$”, with the expected answer being $y=$ Stockholm. In syntactic categories, the two words in a pair have a common stem (in some cases even same lemma), with all the pairs in a given category having the same morphological relationship. For example, given the word pair “long : longer”, we see that we have an adjective in its base form and the same adjective in a comparative form. That task is then to find the term $y$ corresponding to the relationship “dark : $y$”, with the expected answer being $y=$ darker, that is a comparative form of the adjective dark. In the vector space, the analogy task is transformed into vector arithmetic and search for nearest neighbours, i.e. we compute the distance between vectors: d(vec(Finland), vec(Helsinki)) and search for word $y$ which would give the closest result in distance d(vec(Sweden), vec($y$)). In the analogy dataset the analogies are already pre-specified, so we are measuring how close are the given pairs. In the evaluation below, we use analogy datasets for all tested languages based on the English dataset by BIBREF14 . Due to English-centered bias of this dataset, we used a modified dataset which was first written in Slovene language and then translated into other languages BIBREF15. As each instance of analogy contains only four words, without any context, the contextual models (such as ELMo) do not have enough context to generate sensible embeddings. We therefore used some additional text to form simple sentences using the four analogy words, while taking care that their noun case stays the same. For example, for the words "Rome", "Italy", "Paris" and "France" (forming the analogy Rome is to Italy as Paris is to $x$, where the correct answer is $x=$France), we formed the sentence "If the word Rome corresponds to the word Italy, then the word Paris corresponds to the word France". We generated embeddings for those four words in the constructed sentence, substituted the last word with each word in our vocabulary and generated the embeddings again. As typical for non-contextual analogy task, we measure the cosine distance ($d$) between the last word ($w_4$) and the combination of the first three words ($w_2-w_1+w_3$). We use the CSLS metric BIBREF16 to find the closest candidate word ($w_4$). If we find the correct word among the five closest words, we consider that entry as successfully identified. The proportion of correctly identified words forms a statistic called accuracy@5, which we report as the result. We first compare existing Latvian ELMo embeddings from ELMoForManyLangs project with our Latvian embeddings, followed by the detailed analysis of our ELMo embeddings. We trained Latvian ELMo using only CoNLL 2017 corpora. 
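A simplified sketch of the accuracy@5 computation described above is given below. For brevity it ranks candidates by plain cosine similarity to $w_2 - w_1 + w_3$, whereas the evaluation in this report uses the CSLS criterion; the variable names and the candidate matrix are illustrative.

import numpy as np

def accuracy_at_5(quadruples, cand_vectors, cand_words):
    # quadruples: (v1, v2, v3, target_word) with embeddings of the first three analogy words.
    # cand_vectors: (V, d) matrix of candidate embeddings; cand_words: the V candidate tokens.
    cand_norms = np.linalg.norm(cand_vectors, axis=1) + 1e-12
    hits = 0
    for v1, v2, v3, target in quadruples:
        query = v2 - v1 + v3
        sims = cand_vectors @ query / (cand_norms * (np.linalg.norm(query) + 1e-12))
        top5 = np.argsort(-sims)[:5]
        hits += target in {cand_words[i] for i in top5}
    return hits / len(quadruples)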
Since this is the only language, where we trained the embedding model on exactly the same corpora as ELMoForManyLangs models, we chose it for comparison between our ELMo model with ELMoForManyLangs. In other languages, additional or other corpora were used, so a direct comparison would also reflect the quality of the corpora used for training. In Latvian, however, only the size of the training dataset is different. ELMoForManyLangs uses only 20 million tokens and we use the whole corpus of 270 million tokens. The Latvian ELMo model from ELMoForManyLangs project performs significantly worse than EMBEDDIA ELMo Latvian model on all categories of word analogy task (Figure FIGREF16). We also include the comparison with our Estonian ELMo embeddings in the same figure. This comparison shows that while differences between our Latvian and Estonian embeddings can be significant for certain categories, the accuracy score of ELMoForManyLangs is always worse than either of our models. The comparison of Estonian and Latvian models leads us to believe that a few hundred million tokens is a sufficiently large corpus to train ELMo models (at least for word analogy task), but 20-million token corpora used in ELMoForManyLangs are too small. The results for all languages and all ELMo layers, averaged over semantic and syntactic categories, are shown in Table TABREF17. The embeddings after the first LSTM layer perform best in semantic categories. In syntactic categories, the non-contextual CNN layer performs the best. Syntactic categories are less context dependent and much more morphology and syntax based, so it is not surprising that the non-contextual layer performs well. The second LSTM layer embeddings perform the worst in syntactic categories, though still outperforming CNN layer embeddings in semantic categories. Latvian ELMo performs worse compared to other languages we trained, especially in semantic categories, presumably due to smaller training data size. Surprisingly, the original English ELMo performs very poorly in syntactic categories and only outperforms Latvian in semantic categories. The low score can be partially explained by English model scoring $0.00$ in one syntactic category “opposite adjective”, which we have not been able to explain. Evaluation ::: Named Entity Recognition For evaluation of ELMo models on a relevant downstream task, we used named entity recognition (NER) task. NER is an information extraction task that seeks to locate and classify named entity mentions in unstructured text into pre-defined categories such as the person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. To allow comparison of results between languages, we used an adapted version of this task, which uses a reduced set of labels, available in NER datasets for all processed languages. The labels in the used NER datasets are simplified to a common label set of three labels (person - PER, location - LOC, organization - ORG). Each word in the NER dataset is labeled with one of the three mentioned labels or a label 'O' (other, i.e. not a named entity) if it does not fit any of the other three labels. The number of words having each label is shown in Table TABREF19. To measure the performance of ELMo embeddings on the NER task we proceeded as follows. We embedded the text in the datasets sentence by sentence, producing three vectors (one from each ELMo layer) for each token in a sentence. 
We calculated the average of the three vectors and used it as the input of our recognition model. The input layer was followed by a single LSTM layer with 128 LSTM cells and a dropout layer, randomly dropping 10% of the neurons on both the output and the recurrent branch. The final layer of our model was a time distributed softmax layer with 4 neurons. We used ADAM optimiser BIBREF17 with the learning rate 0.01 and $10^{-5}$ learning rate decay. We used categorical cross-entropy as a loss function and trained the model for 3 epochs. We present the results using the Macro $F_1$ score, that is the average of $F_1$-scores for each of the three NE classes (the class Other is excluded). Since the differences between the tested languages depend more on the properties of the NER datasets than on the quality of embeddings, we can not directly compare ELMo models. For this reason, we take the non-contextual fastText embeddings as a baseline and predict named entities using them. The architecture of the model using fastText embeddings is the same as the one using ELMo embeddings, except that the input uses 300 dimensional fastText embedding vectors, and the model was trained for 5 epochs (instead of 3 as for ELMo). In both cases (ELMo and fastText) we trained and evaluated the model five times, because there is some random component involved in initialization of the neural network model. By training and evaluating multiple times, we minimise this random component. The results are presented in Table TABREF21. We included the evaluation of the original ELMo English model in the same table. NER models have little difficulty distinguishing between types of named entities, but recognizing whether a word is a named entity or not is more difficult. For languages with the smallest NER datasets, Croatian and Lithuanian, ELMo embeddings show the largest improvement over fastText embeddings. However, we can observe significant improvements with ELMo also on English and Finnish, which are among the largest datasets (English being by far the largest). Only on Slovenian dataset did ELMo perform slightly worse than fastText, on all other EMBEDDIA languages, the ELMo embeddings improve the results. Conclusion We prepared precomputed ELMo contextual embeddings for seven languages: Croatian, Estonian, Finnish, Latvian, Lithuanian, Slovenian, and Swedish. We present the necessary background on embeddings and contextual embeddings, the details of training the embedding models, and their evaluation. We show that the size of used training sets importantly affects the quality of produced embeddings, and therefore the existing publicly available ELMo embeddings for the processed languages are inadequate. We trained new ELMo embeddings on larger training sets and analysed their properties on the analogy task and on the NER task. The results show that the newly produced contextual embeddings produce substantially better results compared to the non-contextual fastText baseline. In future work, we plan to use the produced contextual embeddings on the problems of news media industry. The pretrained ELMo models will be deposited to the CLARIN repository by the time of the final version of this paper. Acknowledgments The work was partially supported by the Slovenian Research Agency (ARRS) core research programme P6-0411. 
This paper is supported by European Union's Horizon 2020 research and innovation programme under grant agreement No 825153, project EMBEDDIA (Cross-Lingual Embeddings for Less-Represented Languages in European News Media). The results of this publication reflects only the authors' view and the EU Commission is not responsible for any use that may be made of the information it contains. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How larger are the training sets of these versions of ELMo compared to the previous ones? Only give me the answer and do not output any other words.
Answer:
[ "By 14 times.", "up to 1.95 times larger" ]
276
4,499
single_doc_qa
c77955af4cd849fd8747741028dca7f1
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Paper Info Title: Generalized Pole-Residue Method for Dynamic Analysis of Nonlinear Systems based on Volterra Series Publish Date: March 7, 2023 Author List: Qianying Cao (from State Key Laboratory of Coastal and Offshore Engineering, Dalian University of Technology), Anteng Chang (from College of Engineering, Ocean University of China), Junfeng Du (from College of Engineering, Ocean University of China), Lin Lu (from State Key Laboratory of Coastal and Offshore Engineering, Dalian University of Technology) Figure Fig. 1: Procedure to compute the response by a combination of Volterra series and Laguerre polynomials Fig. 2: Linear frequency response function: (a) Modulus of H 1 (ω), (b) phase angle of H 1 (ω) Fig. 6: Comparison of h 1 (t) based on the analytical and reconstructed by Laguerre polynomials Fig. 11: Response for Case 1: (a) comparison between the proposed method and Runge-Kutta method, (b) contribution of the three components Fig. 18: Comparison of original excitations and reconstructed results: (a) Case 1, (b) Case 2 (c) Case 3 Fig. 19: Response to irregular excitation for Case 1: (a) comparison between the proposed method and Runge-Kutta method, (b) contribution of the three components Fig. 23: Input-output dataset used to identify Volterra series: (a) input excitation, (b) output response Fig. 26: Comparison of responses between the predicted and numerical results: (a) response to regular excitation, (b) response to irregular excitation Parameter values of the irregular excitation abstract Dynamic systems characterized by second-order nonlinear ordinary differential equations appear in many fields of physics and engineering. To solve these kinds of problems, time-consuming stepby-step numerical integration methods and convolution methods based on Volterra series in the time domain have been widely used. In contrast, this work develops an efficient generalized pole-residue method based on the Volterra series performed in the Laplace domain. The proposed method involves two steps: (1) the Volterra kernels are decoupled in terms of Laguerre polynomials, and (2) the partial response related to a single Laguerre polynomial is obtained analytically in terms of the pole-residue method. Compared to the traditional pole-residue method for a linear system, one of the novelties of the pole-residue method in this paper is how to deal with the higher-order poles and their corresponding coefficients. Because the proposed method derives an explicit, continuous response function of time, it is much more efficient than traditional numerical methods. Unlike the traditional Laplace domain method, the proposed method is applicable to arbitrary irregular excitations. Because the natural response, forced response and cross response are naturally obtained in the solution procedure, meaningful mathematical and physical insights are gained. In numerical studies, systems with a known equation of motion and an unknown equation of motion are investigated. For each system, regular excitations and complex irregular excitations with different parameters are studied. Numerical studies validate the good accuracy and high efficiency of the proposed method by comparing it with the fourth-order Runge-Kutta method. 
Introduction Most real dynamic systems, as encountered in mechanical and civil engineering, are inherently nonlinear and include geometric nonlinearities, nonlinear constitutive relations in material or nonlinear resistances, etc. . Nonlinear problems are attracting increasing attention from engineers and scientists. This work focuses on solving nonlinear system vibration problems, i.e., computing transient responses of nonlinear oscillators under arbitrary irregular excitations based on a combination of a pole-residue operation and Volterra series. Because Volterra series are single-valued, the scope of the present study is restricted to nonlinear behaviours without bifurcations . To analyse nonlinear vibration problems, researchers have performed extensive studies and developed various mathematical methods. Popular methods include step-by-step numerical integration methods in the time domain, such as the Runge-Kutta method. This kind of method not only requires a small time-step resolution for obtaining high-precision solutions but also is prone to numerical instability . For a long response with small time steps, the time domain methods are very costly in computational time. Volterra series is another widely used method, which is the extension of the Duhamel integral for linear systems . Volterra series can reproduce many nonlinear phenomena, but they are very complex due to higher-dimensional convolution integrals . Since 1980's, significant progress has been made in the general area of the Volterra series. The reader is referred to Ref. for a quite thorough literature review on the relevant topics. After 2017, most papers focus on Volterra series identification. De Paula and Marques proposed a method for the identification of Volterra kernels, which was based on time-delay neural networks. Son and Kim presented a method for a direct estimation of the Volterra kernel coefficients. Dalla Libera et al. introduced two new kernels for Volterra series identification. Peng et al. used the measured response to identify the kernel function and performed the nonlinear structural damage detection. Only a few papers concentrated on simplifying the computation of convolution integrals. Traditional methods for computing convolution integrals involved in the Volterra series have been performed in three distinct domains: time, frequency and Laplace. The time domain method based on Volterra series refers to discrete time convolution methods, which also suffer computational cost problems . Both the frequency domain method and the Laplace domain method based on the Volterra series consist of three steps: (1) Volterra series are transformed into an algebraic equation in the frequency domain or Laplace domain; the algebraic equation is solved by purely algebraic manipulations; and (3) the solution in Step ( ) is transformed back to the time domain. Many researchers have used the frequency domain method to compute the responses of nonlinear systems. Billings et al. developed a new method for identifying the generalized frequency response function (GFRF) of nonlinear systems and then predicted the nonlinear response based on these GFRFs. Carassale et al. introduced a frequency domain approach for nonlinear bridge aerodynamics and aeroelasticity. Ho et al. computed an output frequency domain function of a nonlinear damped duffing system modelled by a Volterra series under a sinusoidal input. Kim et al. 
identified the higher order frequency response functions by using the nonlinear autoregressive with exogenous input technique and the harmonic probing method. This type of frequency domain method is much more efficient than the time domain method due to the fast Fourier transform algorithm. However, the frequency domain method not only is limited by frequency resolutions but also suffers from leakage problems due to the use of discrete Fourier transforms. In addition, the frequency domain method calculates only a steady-state response. A natural response generated by initial conditions and a cross response caused by interactions between a system and an excitation are ignored. In contrast, the Laplace domain method can calculate all response components because initial conditions are considered in the computational procedure. However, it has been restricted to analytical operations for simple excitations, such as sinusoidal excitations and exponential excitations . The proposed method falls into the category of the Volterra series method computed in the Laplace domain. Unlike the traditional Laplace domain method, the proposed method is applicable to arbitrary irregular excitations. Because the proposed method follows a similar path as a pole-residue method for linear systems , the proposed method to solve nonlinear system vibration problems is called the generalized pole-residue method. The main concept of the pole-residue method developed by Hu et al. was that the poles and residues of the response could be easily obtained from those of the input and system transfer function to obtain the closed-form response solution of linear systems. This method included three steps: (1) writing the system transfer function into pole-residue form; (2) writing the excitation into pole-residue form by the Prony-SS method; (3) computing the poles and residues of the response by an algebraic operation based on those from system and excitation. Compared to Hu et al. , which was regarded as an efficient tool to compute responses of linear systems, the generalized pole-residue method in this paper is introduced to compute responses of nonlinear systems. The proposed method involves two steps: (1) the Volterra kernels are decoupled in terms of Laguerre polynomials, and (2) the partial response related to a single Laguerre polynomial is obtained analytically in terms of the pole-residue method. Compared to the traditional pole-residue method for a linear system, one of the novelties of the generalized pole-residue method is how to deal with the higher-order poles and their corresponding coefficients. Similar to the Taylor series, the Volterra series representation is an infinite series, and convergence conditions are needed to assure that the representation is meaningful. Because the proposed method is based on the Volterra series, only the system with convergent Volterra series representation can be treated by the proposed method. The paper is organized as follows. In Section 2, the nonlinear response is modelled by a Volterra series, and Volterra kernel functions are decoupled by Laguerre polynomials. Then, the pole-residue method for computing explicit responses is developed in Section 3. Numerical studies and discussions are given in Section 4. Finally, the conclusions are drawn in Section 5. 
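To fix ideas before the derivation in the next section, the sketch below applies the three-step pole-residue idea quoted above to a plain linear single-degree-of-freedom system with distinct poles and zero initial conditions. It is only a simplified illustration of the linear method of Hu et al., not the generalized nonlinear procedure developed in this paper, and the numerical values are placeholders.

import numpy as np

def linear_pole_residue_response(m, c, k, exc_poles, exc_residues, t):
    # Zero-state response of m*y'' + c*y' + k*y = f(t) with
    # f(t) = sum_l exc_residues[l] * exp(exc_poles[l] * t), all poles distinct.
    H = lambda s: 1.0 / (m * s**2 + c * s + k)                 # transfer function
    F = lambda s: sum(a / (s - lam) for a, lam in zip(exc_residues, exc_poles))
    p1, p2 = np.roots([m, c, k])                               # system poles
    sys_residues = {p1: 1.0 / (m * (p1 - p2)), p2: 1.0 / (m * (p2 - p1))}
    y = np.zeros_like(t, dtype=complex)
    for a, lam in zip(exc_residues, exc_poles):                # forced part: residue a*H(lam) at pole lam
        y += a * H(lam) * np.exp(lam * t)
    for p, r in sys_residues.items():                          # transient part: residue r*F(p) at pole p
        y += r * F(p) * np.exp(p * t)
    return y.real

# Example: f(t) = sin(2t) written as a conjugate pole pair (placeholder parameters).
t = np.linspace(0.0, 10.0, 1001)
y = linear_pole_residue_response(1.0, 1.0, 10.0,
                                 exc_poles=[2j, -2j],
                                 exc_residues=[-0.5j, 0.5j],
                                 t=t)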
Response calculation based on Volterra series A nonlinear oscillator, whose governing equation of motion is given by where z(t, y, ẏ) represents an arbitrary nonlinear term; m, c, and k are the mass, damping and linear stiffness, respectively; y(t), ẏ(t) and ÿ(t) are the displacement, velocity and acceleration, respectively; and f (t) is the time-dependent excitation. If the energy of excitation f (t) is limited, the nonlinear response under zero initial conditions (i.e., zero displacement and zero velocity) can be represented by the Volterra series : where N is the order of Volterra series and In Eq. 3, h 1 (τ ) is called the first-order Volterra kernel function, which represents the linear behaviour of the system; h n (τ 1 , . . . , τ n ) for n > 1 are the higher-order Volterra kernel functions, which describe the nonlinear behaviour of the system. The complete formulation of y(t) includes infinite series where the labour of calculating the n th term increases quickly with the growth of n. Fortunately, the response accuracy may be ensured by the first several order Volterra series. This is proved here in numerical studies. The commonly known Laguerre polynomials are represented as : where p i is the order of the Laguerre polynomials and a i is the damping rate. The Laguerre polynomials satisfy the orthogonal relationship expressed as: By using Laguerre polynomials, the Volterra kernel function h n (t 1 , . . . , t n ) in Eq. 3 can be decoupled as follows : where the coefficient is computed resorting to the orthogonal relationship in Eq. 5: Substituting Eq. 6 into Eq. 3 yields . . . The above operation that uses the Laguerre polynomials to decouple Volterra higher order kernel functions has been well-developed. The reader is referred to Refs. for details about the adopted technique. After decoupling Volterra higher order kernel functions in time, one can regroup Eq. 8 into: . . . By denoting Eq. 9 becomes The above procedure to compute the nonlinear response by a combination of Volterra series and Laguerre polynomials is schematically shown in Fig. . Volterra kernel functions h n (t 1 , . . . , t n ) can be obtained by either an equation of motion or measured input-output signals. To derive a closedform solution of the response, we must obtain a closed-form solution of x i (t) first. In the following presentation, a closed-form solution of the aforementioned x i (t) and y n (t) is derived by using the pole-residue method. 3. Pole-residue method for calculating x i (t) and y n (t) Performing the Laplace transform of x i (t) in Eq. 10 yields where in which Eq. 13 includes a single pole and several higher-order poles. For k = 0, −a i is a single pole, and b p i (0) is a corresponding coefficient, namely, the residue. For k > 0, −a i are higher-order poles, and b p i (k) are corresponding coefficients. For an irregular excitation signal f (t) of a finite duration of T , it can always be approximated into a pole-residue form by using the complex exponential signal decomposition method-Prony-SS : where N ℓ is the number of components; α ℓ and λ ℓ are constant coefficients, which either are real numbers or occur in complex conjugate pairs. We define λ ℓ = −δ ℓ + iΩ ℓ , where Ω ℓ is the excitation frequency and δ ℓ is the damping factor of the ℓ th component. We denote α ℓ = A ℓ e iθ ℓ , where A ℓ is the amplitude and θ ℓ is the sinusoidal initial phase in radians. Taking the Laplace transform of Eq. 
15 yields Note that the concept of the Prony-SS method is similar to that of a principal component method. A smooth excitation usually requires just several terms to achieve a good approximation. For high irregular loadings, including more terms would achieve a better approximation. Substituting Eqs. 13 and 16 into Eq. 12 yields Expressing xi (s) in its pole-residue form yields where λ ℓ are simple poles, and the corresponding residues are easily obtained by and −a i are higher-order poles, and the corresponding coefficients are firstly derived as: By taking the inverse Laplace transform of Eq. 18, a closed-form solution is obtained: Substituting Eqs. 11 and 21 into Eq. 2 yields Theoretically speaking, the proposed method for deriving the closed-form solution of the nonlinear response is applicable to any order of the Volterra series. For practical engineering, usually only the first several order responses dominate. By setting up N = 2, Eq. 22 can be simplified into three components: where the natural response, which is only related to system poles, is given by and the cross response, which is related to both system poles and excitation poles, is given by and the forced response, which is related only to excitation poles, is given by The first term in Eq. 26 is the first-order forced response governed by the excitation frequency, i.e., the imaginary part of the pole λ ℓ . The second term corresponds to the second-order nonlinear forced response, which includes the sum frequency and difference frequency responses governed by λ ℓ + λ j . Eq. 26 straightforwardly offers visible information about the possible nonlinear vibrations by the cooperation of excitation frequencies. Particularly, consider a sinusoidal excitation f (t) = sin ω r t, which can be expressed as f (t) = γe λt + γ * e λ * t , where γ = −0.5i and λ = iω r . Substituting these values into Eq. 26, the second term of Eq. 26 is simplified as where the first term is the difference frequency response, and the second term is the sum frequency response. Numerical studies In practical engineering, some systems have an accurate equation of motion. Additionally, some systems have difficulty constructing their equations of motion because of complex nonlinear dynamic behaviours and uncertain system parameters. In this article, a system with a known equation of motion is called a known system, and a system with an unknown equation of motion is called an unknown system for simplicity. In this section, two numerical studies are presented. The first study verifies the proposed method using a known nonlinear oscillator, and the second study demonstrates the applicability of the proposed method to an unknown system. Throughout the numerical studies, the unit system is the metre-kilogramme-second (MKS) system; for conciseness, explicit units for quantities are omitted. A known nonlinear system This study chooses a nonlinear oscillator written as: where mass m = 1, damping c = 1, linear stiffness k 1 = 10, quadratic stiffness k 2 = 20 and cubic stiffness k 3 = 20. It is a case that has been studied in a previously published article . The linear natural frequency of the system ω 0 = k 1 /m = 3.16 and the damping ratio ζ = c/(2mω 0 ) = 15.8%. This kind of oscillator occurs in many engineering problems, such as a model of fluid resonance in a narrow gap between large vessels . 
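The excitation model of Eq. 15, f(t) ≈ Σ_ℓ α_ℓ e^{λ_ℓ t}, can be illustrated with the classical least-squares Prony method. This is a simpler stand-in for the Prony-SS (state-space) variant actually used in the paper, and the number of exponential terms and the sampled test signal below are illustrative choices only.

```python
import numpy as np

def prony(f, dt, p):
    """Fit f[n] ~ sum_l alpha_l * exp(lambda_l * n * dt) with p exponential terms.

    Classical least-squares Prony method:
      1) linear prediction: f[n] = -sum_{k=1..p} a_k f[n-k], solved in the LS sense;
      2) poles: roots z_l of the prediction polynomial, lambda_l = log(z_l) / dt;
      3) residues: LS fit of the amplitudes alpha_l in a Vandermonde system.
    """
    N = len(f)
    # Step 1: linear-prediction coefficients from a Hankel-structured system.
    A = np.column_stack([f[p - k:N - k] for k in range(1, p + 1)])
    a = np.linalg.lstsq(A, -f[p:N], rcond=None)[0]
    # Step 2: poles of the exponential model.
    z = np.roots(np.concatenate(([1.0], a)))
    lam = np.log(z.astype(complex)) / dt
    # Step 3: residues from a Vandermonde least-squares problem.
    n = np.arange(N)
    V = z[None, :].astype(complex) ** n[:, None]
    alpha = np.linalg.lstsq(V, f.astype(complex), rcond=None)[0]
    return lam, alpha

# Example: a damped two-tone signal (exactly four complex exponentials) recovered from samples.
dt = 0.02
t = np.arange(0.0, 10.0, dt)
f = np.exp(-0.1 * t) * np.sin(2.0 * t) + 0.5 * np.cos(5.0 * t)
lam, alpha = prony(f, dt, p=4)
f_rec = (np.exp(np.outer(t, lam)) @ alpha).real
print(np.max(np.abs(f - f_rec)))       # small reconstruction error
```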
In the model, k 1 y represents the linear restoring force of the fluid, and k 2 y 2 and k 3 y 3 are respectively the quadratic and cubic nonlinear restoring forces of the fluid. Volterra kernel functions Generally, the first several order responses dominate the total response of a system. Hence, the order of the Volterra series in Eq. 22 is chosen to be 3, namely, N = 3. For computing the first three order responses from Eq. 22, the first three order Volterra kernel functions need to be known. Since Volterra kernel functions and corresponding frequency response functions are related by a specific Fourier transform pair, we can first write the first three orders of frequency response functions directly from Eq. 28. Then, Volterra kernel functions are obtained by the inverse Fourier transform. Based on the harmonic probing algorithm , the linear frequency response function (LFRF) H 1 (ω), the quadratic frequency response function (QFRF) H 2 (ω 1 , ω 2 ) and the cubic frequency response function (CFRF) H 3 (ω 1 , ω 2 , ω 3 ) are analytically given by: and Figures show H 1 (ω), H 2 (ω 1 , ω 2 ) and H 3 (ω 1 , ω 2 , ω 3 ), respectively, which agree well with those reported in Ref. . As expected, the modulus of H 1 (ω) in Fig. peaks near the linear natural frequency ω 0 , and the phase angle decreases monotonically from 0 to -π with increasing frequency. Figure shows the sum frequency QFRF, where the energy converges along the line of ω 1 +ω 2 ≈ ω 0 . Therefore, when the sum frequency of a two-tone excitation equals the linear resonant frequency, the second-order response may reach its maximum. Additionally, those pairs of excitations in line ω 1 + ω 2 ≈ ω 0 may produce non-negligible vibration magnitudes due to second-order nonlinear effects. For the difference frequency QFRF in Fig. (b), the energy converges along two main lines, i.e., ω 1 ≈ ω 0 and ω 2 ≈ ω 0 . Figures show moduli of H 3 (ω, ω, ω) and H 3 (ω, ω, −ω), which are diagonal terms of the sum frequency CFRF and the difference frequency CFRF, respectively. While the modulus of H 3 (ω, ω, ω) peaks near ω ≈ ω 0 /3 and ω 0 , that of H 3 (ω, ω, −ω) peaks near ω ≈ ω 0 with a small hump around ω ≈ ω 0 /2. Values at ω ≈ ω 0 /3 and ω 0 /2 may be magnified by higher-order stiffness terms in Eq. 28. By performing the inverse fast Fourier transform to Eqs. 29-31, the corresponding linear impulse response function h 1 (t), quadratic impulse response function h 2 (t 1 , t 2 ) and cubic impulse response function h 3 (t 1 , t 2 , t 3 ) are obtained. Here, h 1 (t) and h 2 (t 1 , t 2 ) are plotted in Figs. , respectively, and h 3 (t, t, t) is shown in Fig. . In the numerical implementation, Eqs. 29-31 have been utilized with the frequency interval ∆ω = 0.1, number of frequency components N n = 1025, and cut-off frequencies 102.4 and −102.4. For decoupling Volterra kernel functions by using Laguerre polynomials, the damping rate and number of Laguerre polynomials for each order Volterra kernel function need to be determined (see Eqs. 4 and 6). In this example, we set a 1 = a 2 = a 3 = 2 and R 1 = R 2 = R 3 = 24 because coefficients c p 1 ...pn become very small when R n > 24, n = 1, 2, 3. According to Eq. 7, the coefficients of the first three order Volterra kernel functions are calculated, which are shown in Figs. 9 and 10. For convenience, Fig. plots only c p 1 p 2 p 3 for p 3 = 0. With the increase of the order of Laguerre polynomials, coefficients in Figs. 
9 and 10 gradually decrease, which illustrates how the first several orders of Laguerre polynomials dominate all orders of the Volterra kernel function. With the known Laguerre polynomials and corresponding coefficients, Volterra kernel functions are reconstructed by Eq. 6. For comparison, reconstructed Volterra kernel functions are also plotted in Figs. . The reconstructed results agree well with the analytical values, which verifies the accuracy of the decomposition. Sinusoidal excitation From Eq. 28, we consider a sinusoidal excitation where A and Ω are the amplitude and the frequency, respectively. Five cases of A and Ω are shown in Table . Excitation frequencies in Cases 1 and 2 are larger than the linear natural frequency (ω 0 ≈ 3.16), those in Case 3 are very close to ω 0 , and those in Cases 4 and 5 are smaller than ω 0 . All cases have same amplitudes. The poles of a sinusoidal excitation are λ 1,2 = ±iΩ, and the residues are α 1,2 = ∓iA/2. Numerical values of excitation poles and residues for different cases are listed in Table . Table : Parameter values, poles and residues of the sinusoidal excitation Substituting poles and residues of the excitation, as well as those of the system into Eqs. 20 and 19, response coefficients β p i ,k corresponding to system poles −a i and response coefficients γ p i ,ℓ corresponding to excitation poles λ ℓ are calculated, respectively. According to Eq. 22, the first three orders of responses for each case in Table are calculated. Figures )-15(a) show the comparison of responses obtained by the proposed method and the fourth-order Runge-Kutta method with ∆t = 10 −4 . For Cases 1 and 2, the first-order responses agree well with the total responses obtained by the Runge-Kutta method, and the higher-order responses only slightly improve the transient parts. For Cases 3-5, the sum of the first three orders of responses is in good agreement with the Runge-Kutta solution. When the response nonlinearity increases, higher-order responses need to be considered. In other words, the proposed method can accurately compute the nonlinear responses by choosing a small number N of Volterra series terms. Figures )-15(b) show the contributions of the three response components for the five cases. In each case, the first-order response is the most dominant component, and the contributions of secondand third-order responses are much less than those of the first-order response. Especially for Cases 1 and 2, whose excitation frequencies are far from the linear natural frequency, second-and thirdorder responses are close to zero. This may be because the QFRF and CFRF approach zero when the frequency is larger than 4 rad/s (see Figs. ). Furthermore, the mean values of the first-order responses are approximately zero, and those of the second-order responses are always smaller than zero, which are the difference frequency components in Eq. 27. Moreover, it is clearly observed that second-order responses for Cases 3-5 exhibit a periodic oscillation with a period near half of that for the first-order response, which is excited by the sum frequency component of the excitation (see second part of Eq. 27). Compared with steady-state solutions of first-and second-order responses, those of third-order responses in Cases 3-5 are no longer single regular motions. By performing the FFT, frequency spectra of these three third-order responses are shown in Fig. . 
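Throughout these comparisons, the reference solution is a fourth-order Runge-Kutta integration of Eq. 28 under zero initial conditions. A minimal sketch of that benchmark is given below for the parameter values quoted earlier (m = 1, c = 1, k1 = 10, k2 = 20, k3 = 20); the amplitude and frequency of the sinusoidal excitation are placeholders, since the table's exact case values are not reproduced here, and the step size is coarser than the Δt = 10^-4 used in the paper to keep the sketch quick.

```python
import numpy as np

# System parameters from Eq. 28.
m, c, k1, k2, k3 = 1.0, 1.0, 10.0, 20.0, 20.0

def f_exc(t, A=1.0, Omega=3.0):
    """Sinusoidal excitation; A and Omega are placeholder values, not the table's cases."""
    return A * np.sin(Omega * t)

def rhs(t, state):
    """First-order form of m*y'' + c*y' + k1*y + k2*y**2 + k3*y**3 = f(t)."""
    y, v = state
    a = (f_exc(t) - c * v - k1 * y - k2 * y**2 - k3 * y**3) / m
    return np.array([v, a])

def rk4(rhs, t_end, dt=1e-3):
    """Classical fixed-step fourth-order Runge-Kutta under zero initial conditions."""
    n_steps = int(round(t_end / dt))
    t = np.linspace(0.0, n_steps * dt, n_steps + 1)
    y = np.zeros((n_steps + 1, 2))
    for i in range(n_steps):
        k_1 = rhs(t[i], y[i])
        k_2 = rhs(t[i] + dt / 2, y[i] + dt / 2 * k_1)
        k_3 = rhs(t[i] + dt / 2, y[i] + dt / 2 * k_2)
        k_4 = rhs(t[i] + dt, y[i] + dt * k_3)
        y[i + 1] = y[i] + dt / 6 * (k_1 + 2 * k_2 + 2 * k_3 + k_4)
    return t, y[:, 0]

t, y = rk4(rhs, t_end=20.0)
print(y[-1])   # displacement at t = 20 s, to be compared against the pole-residue solution
```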
We find that these three third-order responses are all dominated by their own fundamental harmonic component and by the third harmonic (triple-frequency) component. Figure shows the computational time required to calculate the response of the oscillator for Case 1 by the proposed method, the fourth-order Runge-Kutta method and the convolution method. The proposed method, which has an explicit solution, is much more efficient in computational time than the latter two methods, which need small time steps to obtain high-precision solutions. In particular, the efficiency advantage of the proposed method increases with the length of the response time. Fig.: Comparison of the computation efficiency of the proposed method, the fourth-fifth order Runge-Kutta method and the convolution method for the regular loading in Case 1. Irregular excitation In Eq. 28, consider an irregular excitation consisting of several cosine functions, f(t) = Σ_{n=1}^{N_f} A_n cos(Ω_n t + θ_n), where N_f is the number of cosine components and A_n, Ω_n and θ_n are the amplitude, frequency and phase angle of the n-th component, respectively. Table lists three cases of these parameters. In each case, the amplitudes of all components are the same, and the phase angles θ_n, uniformly distributed between 0 and 2π, are randomly generated. To decompose the excitation into a pole-residue form, the Prony-SS method is used, whose concept is similar to that of a principal component method. The readers are referred to Ref. for details. The chosen rank of each case is also shown in Table . Figure shows the comparison of the original excitations and the reconstructed results of these three cases, which all have excellent agreement. The corresponding response figures show the results computed by the fourth-order Runge-Kutta method. In all cases, the sums of the first three orders of responses agree well with those obtained by the Runge-Kutta method. The contributions of the first three orders of responses for each case are plotted in panels (b) of the corresponding figures. Similarly, the system vibration is dominated by the first-order response. However, the contributions of the second- and third-order responses grow significantly with increasing excitation magnitude and frequency number. Furthermore, when the magnitude of the nonlinear response becomes large, sharp troughs are present. This phenomenon may be induced by the nonlinear stiffness. While the first-order response fails to capture these troughs, the higher-order responses capture them successfully. Figure plots the computational time to calculate the response of the oscillator for the irregular loading in Case 1 by the proposed method and the fourth-fifth order Runge-Kutta method, respectively. While the fourth-fifth order Runge-Kutta method is more efficient for a small response length, the proposed method becomes much more efficient when the response length is larger than about 130 s. In addition, the proposed method yields an explicit response solution, so one can directly obtain the response value at a specific time t_p instead of integrating from 0 to t_p as traditional numerical methods must. Fig.: Comparison of the computation efficiency of the proposed method and the fourth-fifth order Runge-Kutta method for the irregular loading in Case 1. An unknown nonlinear system To check the applicability of the proposed method to an unknown nonlinear system, a known input excitation and its corresponding response are used to identify its Volterra kernel functions.
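As detailed in the next paragraph, this identification reduces to ordinary least squares once the Volterra model is expanded in the Laguerre basis: the first-order regressors are convolutions of the excitation with the Laguerre functions, and the second-order regressors are their pairwise products. The sketch below illustrates that construction; the discretisation and the convention for the Laguerre functions are assumptions carried over from the earlier sketch, and f, y and t stand for user-supplied sampled excitation, response and time records.

```python
import numpy as np
from itertools import combinations_with_replacement
from scipy.special import eval_laguerre

def laguerre_function(p, t, a=2.0):
    # Exponentially weighted Laguerre function (same assumed convention as the earlier sketch).
    return np.sqrt(2.0 * a) * np.exp(-a * t) * eval_laguerre(p, 2.0 * a * t)

def laguerre_regressors(f, t, R=10, a=2.0):
    """Build first-order regressors x_p(t) and second-order regressors x_p(t)*x_q(t), p <= q.

    x_p(t) is the convolution of the excitation with the p-th Laguerre function,
    approximated here by a rectangle-rule discrete convolution.
    """
    dt = t[1] - t[0]
    x = np.stack([np.convolve(laguerre_function(p, t, a), f)[: len(t)] * dt
                  for p in range(R)])                       # shape (R, len(t))
    pairs = list(combinations_with_replacement(range(R), 2))
    x2 = np.stack([x[p] * x[q] for p, q in pairs])          # shape (R*(R+1)/2, len(t))
    return x, x2, pairs

def identify_coefficients(f, y, t, R=10, a=2.0):
    """Least-squares estimate of the Laguerre coefficients c_p and c_pq of Eq. 8."""
    x, x2, pairs = laguerre_regressors(f, t, R, a)
    Phi = np.vstack([x, x2]).T                              # one row per time sample
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    c1, c2 = theta[:R], dict(zip(pairs, theta[R:]))
    return c1, c2

# Usage: pass the sampled white-noise excitation f and measured response y on the grid t;
# the returned c1 and c2 then reconstruct h1 and h2 through Eq. 6.
```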
When the Volterra kernel functions are known, we can follow the procedure in Section 4.1 to predict system responses. In this study, the input excitation is white noise with a constant power spectrum S_0 = 0.001, and the corresponding response is obtained by solving Eq. 28 with the fourth-order Runge-Kutta method, as shown in Fig. . From Section 4.1, we determine that the sum of the first two orders of responses agrees well with the total response. In this study, the order of the Volterra series N is chosen to be 2, the damping rates of the Laguerre polynomials are a_1 = a_2 = 2, and the numbers of Laguerre polynomials are R_1 = R_2 = 24. To estimate the first two orders of Volterra kernel functions, a matrix equation is constructed using the excitation data and response data. By using the least-squares method to solve this matrix equation, the coefficients c_{p_1} and c_{p_1 p_2} in Eq. 8 are identified. Figure plots c_{p_1} and c_{p_1 p_2}, respectively, which have good agreement with the exact results in Fig. . Then, the first two orders of Volterra kernel functions are constructed by Eq. 6. Compared with the exact results in Figs. , the identified Volterra kernel functions in Fig. agree well with the exact solutions. Note that the white noise excitation, which can excite more frequency components of the response, is chosen to obtain good Volterra kernel functions. A regular excitation f(t) = sin(πt) and an irregular excitation f(t) = Σ_{n=1}^{N_f} A_n cos(Ω_n t + θ_n), with A_n = 0.3 and Ω_n varying from 0 to 40 with an equal interval of 1, are chosen as input excitations. The predicted responses, along with the results obtained by the fourth-order Runge-Kutta method, are shown in Fig. . In both cases, the proposed method accurately predicts the system responses. As presented in Eq. 23, a nonlinear response is the sum of three terms: the natural response y_s(t), the forced response y_f(t) and the cross response y_c(t). These individual terms, as well as their sums, for the two excitations are shown in Figs. 27 and 28, respectively. As shown in Figs. 27 and 28, both the first- and second-order responses include the natural response y_s(t) and the forced response y_f(t), but the cross response y_c(t) only exists in the second-order responses. When t becomes larger, both y_s(t) and y_c(t) diminish due to the presence of system damping, and the total response is entirely governed by y_f(t). Moreover, we notice some features at t = 0 for these components, including y_s(0) = −y_f(0) for the first-order response and y_s(0) + y_f(0) = −y_c(0) for the second-order response, which are due to the imposed zero initial conditions. Conclusions Considering arbitrary irregular excitations, an efficient generalized pole-residue method to compute the nonlinear dynamic response modelled by the Volterra series was developed. A core of the proposed method was obtaining the poles and corresponding coefficients of the Volterra kernel functions, and then those of each order response modelled by each order of the Volterra series. Once the poles and corresponding coefficients of the Volterra kernel functions and excitations were both available, the remaining derivation could follow a pole-residue method similar to the one that had been developed for ordinary linear oscillators. To obtain the poles and corresponding coefficients of the Volterra kernel functions, two steps were included: (1) using Laguerre polynomials to decouple the higher-order Volterra kernel functions with respect to time and (2) obtaining the poles and corresponding coefficients of the Laguerre polynomials in the Laplace domain.
Because the proposed method gave an explicit, continuous response function of time, it was much more efficient than traditional numerical methods. Moreover, many meaningful physical and mathematical insights were gained because not only each order response but also the natural response, the forced response and the cross response of each order were obtained in the solution procedure. To demonstrate that the proposed method was not only suitable for a system with a known equation of motion but also applicable to a system with an unknown equation of motion, two numerical studies were conducted. For each study, regular excitations and complex irregular excitations with different parameters were investigated. The efficiency of the proposed method was verified by the fourth-order Runge-Kutta method. This paper only computes the response under zero initial conditions. The response under non-zero initial conditions will be investigated in our future work. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What does the paper aim to solve? Only give me the answer and do not output any other words.
Answer:
[ "The paper aims to solve nonlinear system vibration problems efficiently." ]
276
6,952
single_doc_qa
569d79514c294f24b26cc0395d567a59
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction During the first two decades of the 21st century, the sharing and processing of vast amounts of data has become pervasive. This expansion of data sharing and processing capabilities is both a blessing and a curse. Data helps build better information systems for the digital era and enables further research for advanced data management that benefits the society in general. But the use of this very data containing sensitive information conflicts with private data protection, both from an ethical and a legal perspective. There are several application domains on which this situation is particularly acute. This is the case of the medical domain BIBREF0. There are plenty of potential applications for advanced medical data management that can only be researched and developed using real data; yet, the use of medical data is severely limited –when not entirely prohibited– due to data privacy protection policies. One way of circumventing this problem is to anonymise the data by removing, replacing or obfuscating the personal information mentioned, as exemplified in Table TABREF1. This task can be done by hand, having people read and anonymise the documents one by one. Despite being a reliable and simple solution, this approach is tedious, expensive, time consuming and difficult to scale to the potentially thousands or millions of documents that need to be anonymised. For this reason, numerous of systems and approaches have been developed during the last decades to attempt to automate the anonymisation of sensitive content, starting with the automatic detection and classification of sensitive information. Some of these systems rely on rules, patterns and dictionaries, while others use more advanced techniques related to machine learning and, more recently, deep learning. Given that this paper is concerned with text documents (e.g. medical records), the involved techniques are related to Natural Language Processing (NLP). When using NLP approaches, it is common to pose the problem of document anonymisation as a sequence labelling problem, i.e. classifying each token within a sequence as being sensitive information or not. Further, depending on the objective of the anonymisation task, it is also important to determine the type of sensitive information (names of individuals, addresses, age, sex, etc.). The anonymisation systems based on NLP techniques perform reasonably well, but are far from perfect. Depending on the difficulty posed by each dataset or the amount of available data for training machine learning models, the performance achieved by these methods is not enough to fully rely on them in certain situations BIBREF0. However, in the last two years, the NLP community has reached an important milestone thanks to the appearance of the so-called Transformers neural network architectures BIBREF1. In this paper, we conduct several experiments in sensitive information detection and classification on Spanish clinical text using BERT (from `Bidirectional Encoder Representations from Transformers') BIBREF2 as the base for a sequence labelling approach. The experiments are carried out on two datasets: the MEDDOCAN: Medical Document Anonymization shared task dataset BIBREF3, and NUBes BIBREF4, a corpus of real medical reports in Spanish. 
In these experiments, we compare the performance of BERT with other machine-learning-based systems, some of which use language-specific features. Our aim is to evaluate how good a BERT-based model performs without language nor domain specialisation apart from the training data labelled for the task at hand. The rest of the paper is structured as follows: the next section describes related work about data anonymisation in general and clinical data anonymisation in particular; it also provides a more detailed explanation and background about the Transformers architecture and BERT. Section SECREF3 describes the data involved in the experiments and the systems evaluated in this paper, including the BERT-based system; finally, it details the experimental design. Section SECREF4 introduces the results for each set of experiments. Finally, Section SECREF5 contains the conclusions and future lines of work. Related Work The state of the art in the field of Natural Language Processing (NLP) has reached an important milestone in the last couple of years thanks to deep-learning architectures, increasing in several points the performance of new models for almost any text processing task. The major change started with the Transformers model proposed by vaswani2017attention. It substituted the widely used recurrent and convolutional neural network architectures by another approach based solely on self-attention, obtaining an impressive performance gain. The original proposal was focused on an encoder-decoder architecture for machine translation, but soon the use of Transformers was made more general BIBREF1. There are several other popular models that use Transformers, such as Open AI's GPT and GPT2 BIBREF5, RoBERTa BIBREF6 and the most recent XLNet BIBREF7; still, BERT BIBREF2 is one of the most widespread Transformer-based models. BERT trains its unsupervised language model using a Masked Language Model and Next Sentence Prediction. A common problem in NLP is the lack of enough training data. BERT can be pre-trained to learn general or specific language models using very large amounts of unlabelled text (e.g. web content, Wikipedia, etc.), and this knowledge can be transferred to a different downstream task in a process that receives the name fine-tuning. devlin2018bert have used fine-tuning to achieve state-of-the-art results on a wide variety of challenging natural language tasks, such as text classification, Question Answering (QA) and Named Entity Recognition and Classification (NERC). BERT has also been used successfully by other community practitioners for a wide range of NLP-related tasks BIBREF8, BIBREF9. Regarding the task of data anonymisation in particular, anonymisation systems may follow different approaches and pursue different objectives (Cormode and Srivastava, 2009). The first objective of these systems is to detect and classify the sensitive information contained in the documents to be anonymised. In order to achieve that, they use rule-based approaches, Machine Learning (ML) approaches, or a combination of both. Although most of these efforts are for English texts –see, among others, the i2b2 de-identification challenges BIBREF10, BIBREF11, dernon2016deep, or khin2018deep–, other languages are also attracting growing interest. Some examples are mamede2016automated for Portuguese and tveit2004anonymization for Norwegian. With respect to the anonymisation of text written in Spanish, recent studies include medina2018building, hassan2018anonimizacion and garcia2018automating. 
Most notably, in 2019 the first community challenge about anonymisation of medical documents in Spanish, MEDDOCAN BIBREF3, was held as part of the IberLEF initiative. The winners of the challenge –the Neither-Language-nor-Domain-Experts (NLNDE) BIBREF12– achieved F1-scores as high as 0.975 in the task of sensitive information detection and categorisation by using recurrent neural networks with Conditional Random Field (CRF) output layers. At the same challenge, mao2019hadoken occupied the 8th position among 18 participants using BERT. According to the description of the system, the authors used BERT-Base Multilingual Cased and an output CRF layer. However, their system is $\sim $3 F1-score points below our implementation without the CRF layer. Materials and Methods The aim of this paper is to evaluate BERT's multilingual model and compare it to other established machine-learning algorithms in a specific task: sensitive data detection and classification in Spanish clinical free text. This section describes the data involved in the experiments and the systems evaluated. Finally, we introduce the experimental setup. Materials and Methods ::: Data Two datasets are exploited in this article. Both datasets consist of plain text containing clinical narrative written in Spanish, and their respective manual annotations of sensitive information in BRAT BIBREF13 standoff format. In order to feed the data to the different algorithms presented in Section SECREF7, these datasets were transformed to comply with the commonly used BIO sequence representation scheme BIBREF14. Materials and Methods ::: Data ::: NUBes-PHI NUBes BIBREF4 is a corpus of around 7,000 real medical reports written in Spanish and annotated with negation and uncertainty information. Before being published, sensitive information had to be manually annotated and replaced for the corpus to be safely shared. In this article, we work with the NUBes version prior to its anonymisation, that is, with the manual annotations of sensitive information. It follows that the version we work with is not publicly available and, due to contractual restrictions, we cannot reveal the provenance of the data. In order to avoid confusion between the two corpus versions, we henceforth refer to the version relevant in this paper as NUBes-PHI (from `NUBes with Personal Health Information'). NUBes-PHI consists of 32,055 sentences annotated for 11 different sensitive information categories. Overall, it contains 7,818 annotations. The corpus has been randomly split into train (72%), development (8%) and test (20%) sets to conduct the experiments described in this paper. The size of each split and the distribution of the annotations can be consulted in Tables and , respectively. The majority of sensitive information in NUBes-PHI are temporal expressions (`Date' and `Time'), followed by healthcare facility mentions (`Hospital'), and the age of the patient. Mentions of people are not that frequent, with physician names (`Doctor') occurring much more often than patient names (`Patient'). The least frequent sensitive information types, which account for $\sim $10% of the remaining annotations, consist of the patient's sex, job, and kinship, and locations other than healthcare facilities (`Location'). Finally, the tag `Other' includes, for instance, mentions to institutions unrelated to healthcare and whether the patient is right- or left-handed. It occurs just 36 times. 
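Both corpora are distributed as plain text plus BRAT standoff annotations, which are converted to BIO-tagged token sequences before training. The sketch below shows one possible conversion, assuming a simple regex tokenisation and character-offset annotations; the sentence and offsets are invented for illustration, and the tokeniser used in the real pipeline may differ.

```python
import re

def brat_to_bio(text, annotations):
    """Convert character-offset annotations [(start, end, label), ...] to token-level BIO tags.

    Assumes whitespace-delimited tokens; tokens overlapping an annotated span get
    B-label (first token) or I-label (subsequent tokens), everything else gets O.
    """
    tokens = [(m.group(), m.start(), m.end()) for m in re.finditer(r"\S+", text)]
    tags = ["O"] * len(tokens)
    for start, end, label in annotations:
        inside = False
        for i, (_, t_start, t_end) in enumerate(tokens):
            if t_start < end and t_end > start:          # token overlaps the span
                tags[i] = ("I-" if inside else "B-") + label
                inside = True
    return [tok for tok, _, _ in tokens], tags

# Illustrative example with invented offsets and two of the categories listed above.
text = "Paciente atendido en el Hospital de Basurto el 3 de mayo."
annotations = [(24, 43, "Hospital"), (47, 56, "Date")]
print(list(zip(*brat_to_bio(text, annotations))))
```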
Materials and Methods ::: Data ::: The MEDDOCAN corpus The organisers of the MEDDOCAN shared task BIBREF3 curated a synthetic corpus of clinical cases enriched with sensitive information by health documentalists. In this regard, the MEDDOCAN evaluation scenario could be said to be somewhat far from the real use case the technology developed for the shared task is supposed to be applied in. However, at the moment it also provides the only public means for a rigorous comparison between systems for sensitive health information detection in Spanish texts. The size of the MEDDOCAN corpus is shown in Table . Compared to NUBes-PHI (Table ), this corpus contains more sensitive information annotations, both in absolute and relative terms. The sensitive annotation categories considered in MEDDOCAN differ in part from those in NUBes-PHI. Most notably, it contains finer-grained labels for location-related mentions –namely, `Address', `Territory', and `Country'–, and other sensitive information categories that we did not encounter in NUBes-PHI (e.g., identifiers, phone numbers, e-mail addresses, etc.). In total, the MEDDOCAN corpus has 21 sensitive information categories. We refer the reader to the organisers' article BIBREF3 for more detailed information about this corpus. Materials and Methods ::: Systems Apart from experimenting with a pre-trained BERT model, we have run experiments with other systems and baselines, to compare them and obtain a better perspective about BERT's performance in these datasets. Materials and Methods ::: Systems ::: Baseline As the simplest baseline, a sensitive data recogniser and classifier has been developed that consists of regular-expressions and dictionary look-ups. For each category to detect a specific method has been implemented. For instance, the Date, Age, Time and Doctor detectors are based on regular-expressions; Hospital, Sex, Kinship, Location, Patient and Job are looked up in dictionaries. The dictionaries are hand-crafted from the training data available, except for the Patient's case, for which the possible candidates considered are the 100 most common female and male names in Spain according to the Instituto Nacional de Estadística (INE; Spanish Statistical Office). Materials and Methods ::: Systems ::: CRF Conditional Random Fields (CRF) BIBREF15 have been extensively used for tasks of sequential nature. In this paper, we propose as one of the competitive baselines a CRF classifier trained with sklearn-crfsuite for Python 3.5 and the following configuration: algorithm = lbfgs; maximum iterations = 100; c1 = c2 = 0.1; all transitions = true; optimise = false. The features extracted from each token are as follows: [noitemsep] prefixes and suffixes of 2 and 3 characters; the length of the token in characters and the length of the sentence in tokens; whether the token is all-letters, a number, or a sequence of punctuation marks; whether the token contains the character `@'; whether the token is the start or end of the sentence; the token's casing and the ratio of uppercase characters, digits, and punctuation marks to its length; and, the lemma, part-of-speech tag, and named-entity tag given by ixa-pipes BIBREF16 upon analysing the sentence the token belongs to. Noticeably, none of the features used to train the CRF classifier is domain-dependent. However, the latter group of features is language dependent. 
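The CRF baseline can be approximated with sklearn-crfsuite using the configuration quoted above. The feature function below covers only part of the listed feature set (affixes, lengths, shape and casing cues) and omits the ixa-pipes lemma, part-of-speech and named-entity features, so it is an approximation rather than the exact system; X_train and y_train are assumed to be lists of token sequences and BIO tag sequences.

```python
import string
import sklearn_crfsuite

def token_features(sent, i):
    """A subset of the features listed above for the token at position i of a sentence."""
    tok = sent[i]
    punct = set(string.punctuation)
    return {
        "prefix2": tok[:2], "prefix3": tok[:3],
        "suffix2": tok[-2:], "suffix3": tok[-3:],
        "tok_len": len(tok), "sent_len": len(sent),
        "is_alpha": tok.isalpha(), "is_digit": tok.isdigit(),
        "is_punct": all(ch in punct for ch in tok),
        "has_at": "@" in tok,
        "bos": i == 0, "eos": i == len(sent) - 1,
        "is_title": tok.istitle(), "is_upper": tok.isupper(),
        "upper_ratio": sum(ch.isupper() for ch in tok) / len(tok),
    }

def sent_features(sent):
    return [token_features(sent, i) for i in range(len(sent))]

# X_train: list of token lists; y_train: list of BIO tag lists with the same shape.
# crf = sklearn_crfsuite.CRF(
#     algorithm="lbfgs", c1=0.1, c2=0.1,
#     max_iterations=100, all_possible_transitions=True,
# )
# crf.fit([sent_features(s) for s in X_train], y_train)
```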
Materials and Methods ::: Systems ::: spaCy spaCy is a widely used NLP library that implements state-of-the-art text processing pipelines, including a sequence-labelling pipeline similar to the one described by strubell2017fast. spaCy offers several pre-trained models in Spanish, which perform basic NLP tasks such as Named Entity Recognition (NER). In this paper, we have trained a new NER model to detect NUBes-PHI labels. For this purpose, the new model uses all the labels of the training corpus coded with its context at sentence level. The network optimisation parameters and dropout values are the ones recommended in the documentation for small datasets. Finally, the model is trained using batches of size 64. No more features are included, so the classifier is language-dependent but not domain-dependent. Materials and Methods ::: Systems ::: BERT As introduced earlier, BERT has shown an outstanding performance in NERC-like tasks, improving the start-of-the-art results for almost every dataset and language. We take the same approach here, by using the model BERT-Base Multilingual Cased with a Fully Connected (FC) layer on top to perform a fine-tuning of the whole model for an anonymisation task in Spanish clinical data. Our implementation is built on PyTorch and the PyTorch-Transformers library BIBREF1. The training phase consists in the following steps (roughly depicted in Figure ): Pre-processing: since we are relying on a pre-trained BERT model, we must match the same configuration by using a specific tokenisation and vocabulary. BERT also needs that the inputs contains special tokens to signal the beginning and the end of each sequence. Fine-tuning: the pre-processed sequence is fed into the model. BERT outputs the contextual embeddings that encode each of the inputted tokens. This embedding representation for each token is fed into the FC linear layer after a dropout layer (with a 0.1 dropout probability), which in turn outputs the logits for each possible class. The cross-entropy loss function is calculated comparing the logits and the gold labels, and the error is back-propagated to adjust the model parameters. We have trained the model using an AdamW optimiser BIBREF17 with the learning rate set to 3e-5, as recommended by devlin2018bert, and with a gradient clipping of 1.0. We also applied a learning-rate scheduler that warms up the learning rate from zero to its maximum value as the training progresses, which is also a common practice. For each experiment set proposed below, the training was run with an early-stopping patience of 15 epochs. Then, the model that performed best against the development set was used to produce the reported results. The experiments were run on a 64-core server with operating system Ubuntu 16.04, 250GB of RAM memory, and 4 GeForce RTX 2080 GPUs with 11GB of memory. The maximum sequence length was set at 500 and the batch size at 12. In this setting, each epoch –a full pass through all the training data– required about 10 minutes to complete. Materials and Methods ::: Experimental design We have conducted experiments with BERT in the two datasets of Spanish clinical narrative presented in Section SECREF3 The first experiment set uses NUBes-PHI, a corpus of real medical reports manually annotated with sensitive information. Because this corpus is not publicly available, and in order to compare the BERT-based model to other related published systems, the second set of experiments uses the MEDDOCAN 2019 shared task competition dataset. 
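The fine-tuning recipe above (multilingual BERT, a 0.1 dropout, a linear classification head, AdamW at 3e-5, gradient clipping at 1.0 and a warm-up learning-rate schedule) maps almost directly onto today's transformers library. The sketch below is a modern approximation, not the authors' original PyTorch-Transformers code; NUM_LABELS, the number of training steps and the train_loader yielding tokenised batches with aligned label ids are assumptions.

```python
import torch
from torch.nn.utils import clip_grad_norm_
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          get_linear_schedule_with_warmup)

NUM_LABELS = 23                      # e.g. 11 categories in BIO encoding plus "O" (assumption)
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=NUM_LABELS, hidden_dropout_prob=0.1)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
num_training_steps = 10_000          # placeholder; depends on dataset size and epochs
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=num_training_steps)

model.train()
for batch in train_loader:           # assumed dataloader of tokenised batches with BIO label ids
    outputs = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"],
                    labels=batch["labels"])
    outputs.loss.backward()          # cross-entropy over the token labels
    clip_grad_norm_(model.parameters(), 1.0)
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
```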
The following sections provide greater detail about the two experimental setups. Materials and Methods ::: Experimental design ::: Experiment A: NUBes-PHI In this experiment set, we evaluate all the systems presented in Section SECREF7, namely, the rule-based baseline, the CRF classifier, the spaCy entity tagger, and BERT. The evaluation comprises three scenarios of increasing difficulty: [noitemsep] - Evaluates the performance of the systems at predicting whether each token is sensitive or non-sensitive; that is, the measurements only take into account whether a sensitive token has been recognised or not, regardless of the BIO label and the category assigned. This scenario shows how good a system would be at obfuscating sensitive data (e.g., by replacing sensitive tokens with asterisks). - We measure the performance of the systems at predicting the sensitive information type of each token –i.e., the 11 categories presented in Section SECREF5 or `out'. Detecting entity types correctly is important if a system is going to be used to replace sensitive data by fake data of the same type (e.g., random people names). - This is the strictest evaluation, as it takes into account both the BIO label and the category assigned to each individual token. Being able to discern between two contiguous sensitive entities of the same type is relevant not only because it is helpful when producing fake replacements, but because it also yields more accurate statistics of the sensitive information present in a given document collection. The systems are evaluated in terms of micro-average precision, recall and F1-score in all the scenarios. In addition to the scenarios proposed, a subject worth being studied is the need of labelled data. Manually labelled data is an scarce and expensive resource, which for some application domains or languages is difficult to come by. In order to obtain an estimation of the dependency of each system on the available amount of training data, we have retrained all the compared models using decreasing amounts of data –from 100% of the available training instances to just 1%. The same data subsets have been used to train all the systems. Due to the knowledge transferred from the pre-trained BERT model, the BERT-based model is expected to be more robust to data scarcity than those that start their training from scratch. Materials and Methods ::: Experimental design ::: Experiment B: MEDDOCAN In this experiment set, our BERT implementation is compared to several systems that participated in the MEDDOCAN challenge: a CRF classifier BIBREF18, a spaCy entity recogniser BIBREF18, and NLNDE BIBREF12, the winner of the shared task and current state of the art for sensitive information detection and classification in Spanish clinical text. Specifically, we include the results of a domain-independent NLNDE model (S2), and the results of a model enriched with domain-specific embeddings (S3). Finally, we include the results obtained by mao2019hadoken with a CRF output layer on top of BERT embeddings. MEDDOCAN consists of two scenarios: [noitemsep] - This evaluation measures how good a system is at detecting sensitive text spans, regardless of the category assigned to them. - In this scenario, systems are required to match exactly not only the boundaries of each sensitive span, but also the category assigned. The systems are evaluated in terms of micro-averaged precision, recall and F-1 score. 
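The tokenwise micro-averaged scores used in Experiment A can be computed directly from flattened gold and predicted tag sequences. The sketch below is one plausible implementation (treating any non-"O" tag as sensitive for the detection scenario) and is not the official evaluation code.

```python
def micro_prf(gold, pred, detection_only=False):
    """Micro-averaged precision/recall/F1 over flattened token label sequences.

    With detection_only=True every non-"O" tag counts simply as "sensitive",
    which corresponds to the binary detection scenario described above.
    """
    if detection_only:
        gold = ["S" if g != "O" else "O" for g in gold]
        pred = ["S" if p != "O" else "O" for p in pred]
    tp = sum(g == p != "O" for g, p in zip(gold, pred))
    fp = sum(p != "O" and g != p for g, p in zip(gold, pred))
    fn = sum(g != "O" and g != p for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["B-Date", "I-Date", "O", "B-Doctor", "O"]
pred = ["B-Date", "O", "O", "B-Patient", "O"]
print(micro_prf(gold, pred), micro_prf(gold, pred, detection_only=True))
```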
Note that, in contrast to the evaluation in Experiment A, MEDDOCAN measurements are entity-based instead of tokenwise. An exhaustive explanation of the MEDDOCAN evaluation procedure is available online, as well as the official evaluation script, which we used to obtain the reported results. Results This section describes the results obtained in the two sets of experiments: NUBes-PHI and MEDDOCAN. Results ::: Experiment A: NUBes-PHI Table shows the results of the conducted experiments in NUBes-PHI for all the compared systems. The included baseline serves to give a quick insight into how challenging the data is. With simple regular expressions and gazetteers, a precision of 0.853 is obtained. On the other hand, the recall, which directly depends on the coverage provided by the rules and resources, drops to 0.469. Hence, this task is unlikely to be solved without the generalisation capabilities provided by machine-learning and deep-learning models. Regarding the detection scenario –that is, the scenario concerned with a binary classification to determine whether each individual token conveys sensitive information or not–, it can be observed that BERT outperforms its competitors. A fact worth highlighting is that, according to these results, BERT achieves a precision lower than the rest of the systems (i.e., it makes more false positive predictions); in exchange, it obtains a remarkably higher recall. Noticeably, it reaches a recall of 0.979, improving on the second-best system, spaCy, by more than 4 points. The table also shows the results for the relaxed metric that only takes into account the entity type detected, regardless of the BIO label (i.e., ignoring whether the token is at the beginning or in the middle of a sensitive sequence of tokens). The conclusions are very similar to those extracted previously, with BERT gaining 2.1 points of F1-score over the CRF-based approach. The confusion matrices of the predictions made by CRF, spaCy, and BERT in this scenario are shown in Table . As can be seen, BERT has less difficulty in correctly predicting less frequent categories, such as `Location', `Job', and `Patient'. One of the most common mistakes according to the confusion matrices is classifying hospital names as `Location' instead of the more accurate `Hospital'; this is hardly a harmful error, given that a hospital is actually a location. Last, the category `Other' is completely leaked by all the compared systems, most likely due to its almost total lack of support in both the training and evaluation datasets. To finish with this experiment set, Table also shows the strict classification precision, recall and F1-score for the compared systems. Despite the fact that, in general, the systems obtain high values, BERT outperforms them again. BERT's F1-score is 1.9 points higher than the next most competitive result in the comparison. More remarkably, the recall obtained by BERT is about 5 points above. Upon manual inspection of the errors committed by the BERT-based model, we discovered that it has a slight tendency towards producing ill-formed BIO sequences (e.g., starting a sensitive span with `Inside' instead of `Begin'; see Table ). We could expect that complementing the BERT-based model with a CRF layer on top would help enforce the emission of valid sequences, alleviating this kind of error and further improving its results. Finally, Figure shows the impact of decreasing the amount of training data in the detection scenario.
It shows the difference in precision, recall, and F1-score with respect to that obtained using 100% of the training data. A general downward trend can be observed, as one would expect: less training data leads to less accurate predictions. However, the BERT-based model is the most robust to training-data reduction, showing an steadily low performance loss. With 1% of the dataset (230 training instances), the BERT-based model only suffers a striking 7-point F1-score loss, in contrast to the 32 and 39 points lost by the CRF and spaCy models, respectively. This steep performance drop stems to a larger extent from recall decline, which is not that marked in the case of BERT. Overall, these results indicate that the transfer-learning achieved through the BERT multilingual pre-trained model not only helps obtain better results, but also lowers the need of manually labelled data for this application domain. Results ::: Experiment B: MEDDOCAN The results of the two MEDDOCAN scenarios –detection and classification– are shown in Table . These results follow the same pattern as in the previous experiments, with the CRF classifier being the most precise of all, and BERT outperforming both the CRF and spaCy classifiers thanks to its greater recall. We also show the results of mao2019hadoken who, despite of having used a BERT-based system, achieve lower scores than our models. The reason why it should be so remain unclear. With regard to the winner of the MEDDOCAN shared task, the BERT-based model has not improved the scores obtained by neither the domain-dependent (S3) nor the domain-independent (S2) NLNDE model. However, attending to the obtained results, BERT remains only 0.3 F1-score points behind, and would have achieved the second position among all the MEDDOCAN shared task competitors. Taking into account that only 3% of the gold labels remain incorrectly annotated, the task can be considered almost solved, and it is not clear if the differences among the systems are actually significant, or whether they stem from minor variations in initialisation or a long-tail of minor labelling inconsistencies. Conclusions and Future Work In this work we have briefly introduced the problems related to data privacy protection in clinical domain. We have also described some of the groundbreaking advances on the Natural Language Processing field due to the appearance of Transformers-based deep-learning architectures and transfer learning from very large general-domain multilingual corpora, focusing our attention in one of its most representative examples, Google's BERT model. In order to assess the performance of BERT for Spanish clinical data anonymisation, we have conducted several experiments with a BERT-based sequence labelling approach using the pre-trained multilingual BERT model shared by Google as the starting point for the model training. We have compared this BERT-based sequence labelling against other methods and systems. One of the experiments uses the MEDDOCAN 2019 shared task dataset, while the other uses a novel Spanish clinical reports dataset called NUBes-PHI. The results of the experiments show that, in NUBes-PHI, the BERT-based model outperforms the other systems without requiring any adaptation or domain-specific feature engineering, just by being trained on the provided labelled data. Interestingly, the BERT-based model obtains a remarkably higher recall than the other systems. 
High recall is a desirable outcome because, when anonymising sensitive documents, the accidental leak of sensitive data is likely to be more dangerous than the unintended over-obfuscation of non-sensitive text. Further, we have conducted an additional experiment on this dataset by progressively reducing the training data for all the compared systems. The BERT-based model shows the highest robustness to training-data scarcity, losing only 7 points of F1-score when trained on 230 instances instead of 21,371. These observations are in line with the results obtained by the NLP community using BERT for other tasks. The experiments with the MEDDOCAN 2019 shared task dataset follow the same pattern. In this case, the BERT-based model falls 0.3 F1-score points behind the shared task winning system, but it would have achieved the second position in the competition with no further refinement. Since we have used a pre-trained multilingual BERT model, the same approach is likely to work for other languages just by providing some labelled training data. Further, this is the simplest fine-tuning that can be performed based on BERT. More sophisticated fine-tuning layers could help improve the results. For example, it could be expected that a CRF layer would help enforce better BIO tagging sequence predictions. Precisely, mao2019hadoken participated in the MEDDOCAN competition using a BERT+CRF architecture, but their reported scores are about 3 points lower than our implementation. From the description of their work, it is unclear what the source of this score difference could be. Further, at the time of writing this paper, new multilingual pre-trained models and Transformer architectures have become available. It would not come as a surprise if these new resources and systems –e.g., XLM-RoBERTa BIBREF19 or BETO BIBREF20, a BERT model fully pre-trained on Spanish texts– further advanced the state of the art in this task. Acknowledgements This work has been supported by Vicomtech and partially funded by the project DeepReading (RTI2018-096846-B-C21, MCIU/AEI/FEDER,UE). Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What are the clinical datasets used in the paper? Only give me the answer and do not output any other words.
Answer:
[ "MEDDOCAN, NUBes-PHI", "MEDDOCAN, NUBes " ]
276
6,018
single_doc_qa
0892548c28564f47801067aae3f46512
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The Transformer architecture BIBREF0 for deep neural networks has quickly risen to prominence in NLP through its efficiency and performance, leading to improvements in the state of the art of Neural Machine Translation BIBREF1, BIBREF2, as well as inspiring other powerful general-purpose models like BERT BIBREF3 and GPT-2 BIBREF4. At the heart of the Transformer lie multi-head attention mechanisms: each word is represented by multiple different weighted averages of its relevant context. As suggested by recent works on interpreting attention head roles, separate attention heads may learn to look for various relationships between tokens BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. The attention distribution of each head is predicted typically using the softmax normalizing transform. As a result, all context words have non-zero attention weight. Recent work on single attention architectures suggest that using sparse normalizing transforms in attention mechanisms such as sparsemax – which can yield exactly zero probabilities for irrelevant words – may improve performance and interpretability BIBREF12, BIBREF13, BIBREF14. Qualitative analysis of attention heads BIBREF0 suggests that, depending on what phenomena they capture, heads tend to favor flatter or more peaked distributions. Recent works have proposed sparse Transformers BIBREF10 and adaptive span Transformers BIBREF11. However, the “sparsity" of those models only limits the attention to a contiguous span of past tokens, while in this work we propose a highly adaptive Transformer model that is capable of attending to a sparse set of words that are not necessarily contiguous. Figure FIGREF1 shows the relationship of these methods with ours. Our contributions are the following: We introduce sparse attention into the Transformer architecture, showing that it eases interpretability and leads to slight accuracy gains. We propose an adaptive version of sparse attention, where the shape of each attention head is learnable and can vary continuously and dynamically between the dense limit case of softmax and the sparse, piecewise-linear sparsemax case. We make an extensive analysis of the added interpretability of these models, identifying both crisper examples of attention head behavior observed in previous work, as well as novel behaviors unraveled thanks to the sparsity and adaptivity of our proposed model. Background ::: The Transformer In NMT, the Transformer BIBREF0 is a sequence-to-sequence (seq2seq) model which maps an input sequence to an output sequence through hierarchical multi-head attention mechanisms, yielding a dynamic, context-dependent strategy for propagating information within and across sentences. It contrasts with previous seq2seq models, which usually rely either on costly gated recurrent operations BIBREF15, BIBREF16 or static convolutions BIBREF17. Given $n$ query contexts and $m$ sequence items under consideration, attention mechanisms compute, for each query, a weighted representation of the items. 
The particular attention mechanism used in BIBREF0 is called scaled dot-product attention, and it is computed in the following way: where $\mathbf {Q} \in \mathbb {R}^{n \times d}$ contains representations of the queries, $\mathbf {K}, \mathbf {V} \in \mathbb {R}^{m \times d}$ are the keys and values of the items attended over, and $d$ is the dimensionality of these representations. The $\mathbf {\pi }$ mapping normalizes row-wise using softmax, $\mathbf {\pi }(\mathbf {Z})_{ij} = \operatornamewithlimits{\mathsf {softmax}}(\mathbf {z}_i)_j$, where In words, the keys are used to compute a relevance score between each item and query. Then, normalized attention weights are computed using softmax, and these are used to weight the values of each item at each query context. However, for complex tasks, different parts of a sequence may be relevant in different ways, motivating multi-head attention in Transformers. This is simply the application of Equation DISPLAY_FORM7 in parallel $H$ times, each with a different, learned linear transformation that allows specialization: In the Transformer, there are three separate multi-head attention mechanisms for distinct purposes: Encoder self-attention: builds rich, layered representations of each input word, by attending on the entire input sentence. Context attention: selects a representative weighted average of the encodings of the input words, at each time step of the decoder. Decoder self-attention: attends over the partial output sentence fragment produced so far. Together, these mechanisms enable the contextualized flow of information between the input sentence and the sequential decoder. Background ::: Sparse Attention The softmax mapping (Equation DISPLAY_FORM8) is elementwise proportional to $\exp $, therefore it can never assign a weight of exactly zero. Thus, unnecessary items are still taken into consideration to some extent. Since its output sums to one, this invariably means less weight is assigned to the relevant items, potentially harming performance and interpretability BIBREF18. This has motivated a line of research on learning networks with sparse mappings BIBREF19, BIBREF20, BIBREF21, BIBREF22. We focus on a recently-introduced flexible family of transformations, $\alpha $-entmax BIBREF23, BIBREF14, defined as: where $\triangle ^d \lbrace \mathbf {p}\in \mathbb {R}^d:\sum _{i} p_i = 1\rbrace $ is the probability simplex, and, for $\alpha \ge 1$, $\mathsf {H}^{\textsc {T}}_\alpha $ is the Tsallis continuous family of entropies BIBREF24: This family contains the well-known Shannon and Gini entropies, corresponding to the cases $\alpha =1$ and $\alpha =2$, respectively. Equation DISPLAY_FORM14 involves a convex optimization subproblem. Using the definition of $\mathsf {H}^{\textsc {T}}_\alpha $, the optimality conditions may be used to derive the following form for the solution (Appendix SECREF83): where $[\cdot ]_+$ is the positive part (ReLU) function, $\mathbf {1}$ denotes the vector of all ones, and $\tau $ – which acts like a threshold – is the Lagrange multiplier corresponding to the $\sum _i p_i=1$ constraint. Background ::: Sparse Attention ::: Properties of @!START@$\alpha $@!END@-entmax. The appeal of $\alpha $-entmax for attention rests on the following properties. For $\alpha =1$ (i.e., when $\mathsf {H}^{\textsc {T}}_\alpha $ becomes the Shannon entropy), it exactly recovers the softmax mapping (We provide a short derivation in Appendix SECREF89.). For all $\alpha >1$ it permits sparse solutions, in stark contrast to softmax. 
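In code, the scaled dot-product attention above is only a few lines once the row normalisation π is made pluggable. The sketch below pairs it with an implementation of the α = 2 member of this family — the piecewise-linear sparsemax mapping discussed next — via its exact sorting-based threshold; shapes and the random inputs are illustrative, and faster routines for 1.5-entmax and bisection-based α-entmax exist in the publicly available entmax package rather than being re-implemented here.

```python
import math
import torch

def sparsemax(z, dim=-1):
    """Exact alpha=2 entmax: Euclidean projection of z onto the probability simplex."""
    z_sorted, _ = torch.sort(z, dim=dim, descending=True)
    cumsum = z_sorted.cumsum(dim) - 1.0
    k = torch.arange(1, z.size(dim) + 1, device=z.device, dtype=z.dtype)
    k = k.view([-1 if d == (dim % z.dim()) else 1 for d in range(z.dim())])
    support = (k * z_sorted > cumsum).to(z.dtype)          # prefix of the sorted scores
    k_star = support.sum(dim=dim, keepdim=True)            # support size per row
    tau = cumsum.gather(dim, k_star.long() - 1) / k_star   # threshold per row
    return torch.clamp(z - tau, min=0.0)

def scaled_dot_product_attention(Q, K, V, normalizer=torch.softmax):
    """pi(Q K^T / sqrt(d)) V with a pluggable row normalisation pi."""
    d = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d)
    weights = normalizer(scores, dim=-1)                   # softmax or a sparse mapping
    return weights @ V, weights

Q, K, V = (torch.randn(2, 5, 16) for _ in range(3))        # (batch, length, d)
_, dense = scaled_dot_product_attention(Q, K, V)                       # softmax weights
_, sparse = scaled_dot_product_attention(Q, K, V, normalizer=sparsemax)
print((dense == 0).float().mean().item(), (sparse == 0).float().mean().item())
```

Only the normaliser changes relative to the standard head, which is why the modification slots equally into encoder self-attention, context attention and decoder self-attention.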
In particular, for $\alpha =2$, it recovers the sparsemax mapping BIBREF19, which is piecewise linear. In-between, as $\alpha $ increases, the mapping continuously gets sparser as its curvature changes. To compute the value of $\alpha $-entmax, one must find the threshold $\tau $ such that the r.h.s. in Equation DISPLAY_FORM16 sums to one. BIBREF23 propose a general bisection algorithm. BIBREF14 introduce a faster, exact algorithm for $\alpha =1.5$, and enable using $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}$ with fixed $\alpha $ within a neural network by showing that the $\alpha $-entmax Jacobian w.r.t. $\mathbf {z}$ for $\mathbf {p}^\star = \mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z})$ has a closed form. Our work furthers the study of $\alpha $-entmax by providing a derivation of the Jacobian w.r.t. the hyper-parameter $\alpha $ (Section SECREF3), thereby allowing the shape and sparsity of the mapping to be learned automatically. This is particularly appealing in the context of multi-head attention mechanisms, where we shall show in Section SECREF35 that different heads tend to learn different sparsity behaviors. Adaptively Sparse Transformers with $\alpha $-entmax We now propose a novel Transformer architecture wherein we simply replace softmax with $\alpha $-entmax in the attention heads. Concretely, we replace the row normalization $\mathbf {\pi }$ in Equation DISPLAY_FORM7 by $\mathbf {\pi }(\mathbf {Z})_{ij} = \mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z}_i)_j$. This change leads to sparse attention weights, as long as $\alpha >1$; in particular, $\alpha =1.5$ is a sensible starting point BIBREF14. Adaptively Sparse Transformers with $\alpha $-entmax ::: Different $\alpha $ per head. Unlike LSTM-based seq2seq models, where $\alpha $ can be more easily tuned by grid search, in a Transformer, there are many attention heads in multiple layers. Crucial to the power of such models, the different heads capture different linguistic phenomena, some of them isolating important words, others spreading out attention across phrases BIBREF0. This motivates using different, adaptive $\alpha $ values for each attention head, such that some heads may learn to be sparser, and others may become closer to softmax. We propose doing so by treating the $\alpha $ values as neural network parameters, optimized via stochastic gradients along with the other weights. Adaptively Sparse Transformers with $\alpha $-entmax ::: Derivatives w.r.t. $\alpha $. In order to optimize $\alpha $ automatically via gradient methods, we must compute the Jacobian of the entmax output w.r.t. $\alpha $. Since entmax is defined through an optimization problem, this is non-trivial and cannot be simply handled through automatic differentiation; it falls within the domain of argmin differentiation, an active research topic in optimization BIBREF25, BIBREF26. One of our key contributions is the derivation of a closed-form expression for this Jacobian. The next proposition provides such an expression, enabling entmax layers with adaptive $\alpha $. To the best of our knowledge, ours is the first neural network module that can automatically, continuously vary in shape away from softmax and toward sparse mappings like sparsemax. Proposition 1 Let $\mathbf {p}^\star = \mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z})$ be the solution of Equation DISPLAY_FORM14. Denote the distribution $\tilde{p}_i = \frac{(p_i^\star )^{2 - \alpha }}{\sum _j(p_j^\star )^{2-\alpha }}$ and let $h_i = -p^\star _i \log p^\star _i$.
The $i$th component of the Jacobian $\mathbf {g} = \frac{\partial \, \alpha \text{-}\mathsf {entmax}(\mathbf {z})}{\partial \alpha }$ is derived in Appendix SECREF10; the proof uses implicit function differentiation. Proposition UNKREF22 provides the remaining missing piece needed for training adaptively sparse Transformers. In the following section, we evaluate this strategy on neural machine translation, and analyze the behavior of the learned attention heads. Experiments We apply our adaptively sparse Transformers on four machine translation tasks. For comparison, a natural baseline is the standard Transformer architecture using the softmax transform in its multi-head attention mechanisms. We consider two other model variants in our experiments that make use of different normalizing transformations: 1.5-entmax: a Transformer with sparse entmax attention with fixed $\alpha =1.5$ for all heads. This is a novel model, since 1.5-entmax had only been proposed for RNN-based NMT models BIBREF14, but never in Transformers, where attention modules are not just one single component of the seq2seq model but rather an integral part of all of the model components. $\alpha $-entmax: an adaptive Transformer with sparse entmax attention with a different, learned $\alpha _{i,j}^t$ for each head. The adaptive model has an additional scalar parameter per attention head per layer for each of the three attention mechanisms (encoder self-attention, context attention, and decoder self-attention), i.e., a scalar $a_{i,j}^t \in \mathbb {R}$ for each layer $i$, head $j$, and attention mechanism $t$, and we set $\alpha _{i,j}^t = 1 + \mathsf {sigmoid}(a_{i,j}^t) \in ]1, 2[$. All or some of the $\alpha $ values can be tied if desired, but we keep them independent for analysis purposes. Experiments ::: Datasets. Our models were trained on 4 machine translation datasets of different training sizes: IWSLT 2017 German $\rightarrow $ English BIBREF27: 200K sentence pairs. KFTT Japanese $\rightarrow $ English BIBREF28: 300K sentence pairs. WMT 2016 Romanian $\rightarrow $ English BIBREF29: 600K sentence pairs. WMT 2014 English $\rightarrow $ German BIBREF30: 4.5M sentence pairs. All of these datasets were preprocessed with byte-pair encoding BIBREF31, using joint segmentations of 32k merge operations. Experiments ::: Training. We follow the dimensions of the Transformer-Base model of BIBREF0: the number of layers is $L=6$ and the number of heads is $H=8$ in the encoder self-attention, the context attention, and the decoder self-attention. We use a mini-batch size of 8192 tokens and warm up the learning rate linearly until 20k steps, after which it decays according to an inverse square root schedule. All models were trained until convergence of validation accuracy, and evaluation was done every 10k steps for ro$\rightarrow $en and en$\rightarrow $de and every 5k steps for de$\rightarrow $en and ja$\rightarrow $en. The end-to-end computational overhead of our methods, when compared to standard softmax, is relatively small; in training tokens per second, the models using $\alpha $-entmax and $1.5$-entmax are, respectively, $75\%$ and $90\%$ the speed of the softmax model. Experiments ::: Results. We report test set tokenized BLEU BIBREF32 results in Table TABREF27. We can see that replacing softmax by entmax does not hurt performance in any of the datasets; indeed, sparse attention Transformers tend to have slightly higher BLEU, but their sparsity leads to a better potential for analysis.
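As a concrete illustration of the per-head parametrization $\alpha _{i,j}^t = 1 + \mathsf {sigmoid}(a_{i,j}^t)$, here is a hedged PyTorch sketch of how the unconstrained scalars could be stored as trainable parameters; the module and attribute names are assumptions for illustration, not the authors' implementation, and the entmax call itself is left abstract.

import torch
import torch.nn as nn

class PerHeadAlpha(nn.Module):
    """One unconstrained scalar per (layer, head, attention type), mapped into ]1, 2[."""
    def __init__(self, n_layers=6, n_heads=8, attn_types=("enc", "ctx", "dec"), init=0.0):
        super().__init__()
        self.a = nn.ParameterDict({
            t: nn.Parameter(torch.full((n_layers, n_heads), init)) for t in attn_types
        })

    def alpha(self, attn_type, layer):
        # alpha = 1 + sigmoid(a) stays strictly between softmax (1) and sparsemax (2)
        return 1.0 + torch.sigmoid(self.a[attn_type][layer])

alphas = PerHeadAlpha()
print(alphas.alpha("enc", 0))   # eight values, all 1.5 at init since sigmoid(0) = 0.5

A real forward pass would feed these values into an entmax implementation that accepts a tensor-valued alpha, so that gradients reach the parameters a through the Jacobian of Proposition 1.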
In the next section, we make use of this potential by exploring the learned internal mechanics of the self-attention heads. Analysis We conduct an analysis for the higher-resource dataset WMT 2014 English $\rightarrow $ German of the attention in the sparse adaptive Transformer model ($\alpha $-entmax) at multiple levels: we analyze high-level statistics as well as individual head behavior. Moreover, we make a qualitative analysis of the interpretability capabilities of our models. Analysis ::: High-Level Statistics ::: What kind of @!START@$\alpha $@!END@ values are learned? Figure FIGREF37 shows the learning trajectories of the $\alpha $ parameters of a selected subset of heads. We generally observe a tendency for the randomly-initialized $\alpha $ parameters to decrease initially, suggesting that softmax-like behavior may be preferable while the model is still very uncertain. After around one thousand steps, some heads change direction and become sparser, perhaps as they become more confident and specialized. This shows that the initialization of $\alpha $ does not predetermine its sparsity level or the role the head will have throughout. In particular, head 8 in the encoder self-attention layer 2 first drops to around $\alpha =1.3$ before becoming one of the sparsest heads, with $\alpha \approx 2$. The overall distribution of $\alpha $ values at convergence can be seen in Figure FIGREF38. We can observe that the encoder self-attention blocks learn to concentrate the $\alpha $ values in two modes: a very sparse one around $\alpha \rightarrow 2$, and a dense one between softmax and 1.5-entmax . However, the decoder self and context attention only learn to distribute these parameters in a single mode. We show next that this is reflected in the average density of attention weight vectors as well. Analysis ::: High-Level Statistics ::: Attention weight density when translating. For any $\alpha >1$, it would still be possible for the weight matrices in Equation DISPLAY_FORM9 to learn re-scalings so as to make attention sparser or denser. To visualize the impact of adaptive $\alpha $ values, we compare the empirical attention weight density (the average number of tokens receiving non-zero attention) within each module, against sparse Transformers with fixed $\alpha =1.5$. Figure FIGREF40 shows that, with fixed $\alpha =1.5$, heads tend to be sparse and similarly-distributed in all three attention modules. With learned $\alpha $, there are two notable changes: (i) a prominent mode corresponding to fully dense probabilities, showing that our models learn to combine sparse and dense attention, and (ii) a distinction between the encoder self-attention – whose background distribution tends toward extreme sparsity – and the other two modules, who exhibit more uniform background distributions. This suggests that perhaps entirely sparse Transformers are suboptimal. The fact that the decoder seems to prefer denser attention distributions might be attributed to it being auto-regressive, only having access to past tokens and not the full sentence. We speculate that it might lose too much information if it assigned weights of zero to too many tokens in the self-attention, since there are fewer tokens to attend to in the first place. Teasing this down into separate layers, Figure FIGREF41 shows the average (sorted) density of each head for each layer. 
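The density statistic used in this comparison is easy to reproduce; the following is a small NumPy sketch (an illustrative reconstruction, not the authors' evaluation code) that computes, for one head, the average fraction of tokens receiving non-zero attention per query.

import numpy as np

def attention_density(weights, tol=0.0):
    """weights: (n_queries, n_tokens), rows summing to 1; mean fraction of tokens per
    query whose attention weight is strictly above tol."""
    nonzero_per_query = (weights > tol).sum(axis=-1)
    return float(nonzero_per_query.mean() / weights.shape[-1])

dense_head = np.full((4, 10), 0.1)                        # softmax-like: every token gets weight
sparse_head = np.zeros((4, 10)); sparse_head[:, :2] = 0.5  # entmax-like: only 2 tokens survive
print(attention_density(dense_head), attention_density(sparse_head))  # 1.0 0.2

Averaged over a validation set and grouped by attention module, this is the kind of statistic summarized in Figures FIGREF40 and FIGREF41.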
We observe that $\alpha $-entmax is able to learn different sparsity patterns at each layer, leading to more variance in individual head behavior, to clearly-identified dense and sparse heads, and overall to different tendencies compared to the fixed case of $\alpha =1.5$. Analysis ::: High-Level Statistics ::: Head diversity. To measure the overall disagreement between attention heads, as a measure of head diversity, we use the following generalization of the Jensen-Shannon divergence: where $\mathbf {p}_j$ is the vector of attention weights assigned by head $j$ to each word in the sequence, and $\mathsf {H}^\textsc {S}$ is the Shannon entropy, base-adjusted based on the dimension of $\mathbf {p}$ such that $JS \le 1$. We average this measure over the entire validation set. The higher this metric is, the more the heads are taking different roles in the model. Figure FIGREF44 shows that both sparse Transformer variants show more diversity than the traditional softmax one. Interestingly, diversity seems to peak in the middle layers of the encoder self-attention and context attention, while this is not the case for the decoder self-attention. The statistics shown in this section can be found for the other language pairs in Appendix SECREF8. Analysis ::: Identifying Head Specializations Previous work pointed out some specific roles played by different heads in the softmax Transformer model BIBREF33, BIBREF5, BIBREF9. Identifying the specialization of a head can be done by observing the type of tokens or sequences that the head often assigns most of its attention weight; this is facilitated by sparsity. Analysis ::: Identifying Head Specializations ::: Positional heads. One particular type of head, as noted by BIBREF9, is the positional head. These heads tend to focus their attention on either the previous or next token in the sequence, thus obtaining representations of the neighborhood of the current time step. In Figure FIGREF47, we show attention plots for such heads, found for each of the studied models. The sparsity of our models allows these heads to be more confident in their representations, by assigning the whole probability distribution to a single token in the sequence. Concretely, we may measure a positional head's confidence as the average attention weight assigned to the previous token. The softmax model has three heads for position $-1$, with median confidence $93.5\%$. The $1.5$-entmax model also has three heads for this position, with median confidence $94.4\%$. The adaptive model has four heads, with median confidences $95.9\%$, the lowest-confidence head being dense with $\alpha =1.18$, while the highest-confidence head being sparse ($\alpha =1.91$). For position $+1$, the models each dedicate one head, with confidence around $95\%$, slightly higher for entmax. The adaptive model sets $\alpha =1.96$ for this head. Analysis ::: Identifying Head Specializations ::: BPE-merging head. Due to the sparsity of our models, we are able to identify other head specializations, easily identifying which heads should be further analysed. In Figure FIGREF51 we show one such head where the $\alpha $ value is particularly high (in the encoder, layer 1, head 4 depicted in Figure FIGREF37). We found that this head most often looks at the current time step with high confidence, making it a positional head with offset 0. However, this head often spreads weight sparsely over 2-3 neighboring tokens, when the tokens are part of the same BPE cluster or hyphenated words. 
As this head is in the first layer, it provides a useful service to the higher layers by combining information evenly within some BPE clusters. For each BPE cluster or cluster of hyphenated words, we computed a score between 0 and 1 that corresponds to the maximum attention mass assigned by any token to the rest of the tokens inside the cluster in order to quantify the BPE-merging capabilities of these heads. There are not any attention heads in the softmax model that are able to obtain a score over $80\%$, while for $1.5$-entmax and $\alpha $-entmax there are two heads in each ($83.3\%$ and $85.6\%$ for $1.5$-entmax and $88.5\%$ and $89.8\%$ for $\alpha $-entmax). Analysis ::: Identifying Head Specializations ::: Interrogation head. On the other hand, in Figure FIGREF52 we show a head for which our adaptively sparse model chose an $\alpha $ close to 1, making it closer to softmax (also shown in encoder, layer 1, head 3 depicted in Figure FIGREF37). We observe that this head assigns a high probability to question marks at the end of the sentence in time steps where the current token is interrogative, thus making it an interrogation-detecting head. We also observe this type of heads in the other models, which we also depict in Figure FIGREF52. The average attention weight placed on the question mark when the current token is an interrogative word is $98.5\%$ for softmax, $97.0\%$ for $1.5$-entmax, and $99.5\%$ for $\alpha $-entmax. Furthermore, we can examine sentences where some tendentially sparse heads become less so, thus identifying sources of ambiguity where the head is less confident in its prediction. An example is shown in Figure FIGREF55 where sparsity in the same head differs for sentences of similar length. Related Work ::: Sparse attention. Prior work has developed sparse attention mechanisms, including applications to NMT BIBREF19, BIBREF12, BIBREF20, BIBREF22, BIBREF34. BIBREF14 introduced the entmax function this work builds upon. In their work, there is a single attention mechanism which is controlled by a fixed $\alpha $. In contrast, this is the first work to allow such attention mappings to dynamically adapt their curvature and sparsity, by automatically adjusting the continuous $\alpha $ parameter. We also provide the first results using sparse attention in a Transformer model. Related Work ::: Fixed sparsity patterns. Recent research improves the scalability of Transformer-like networks through static, fixed sparsity patterns BIBREF10, BIBREF35. Our adaptively-sparse Transformer can dynamically select a sparsity pattern that finds relevant words regardless of their position (e.g., Figure FIGREF52). Moreover, the two strategies could be combined. In a concurrent line of research, BIBREF11 propose an adaptive attention span for Transformer language models. While their work has each head learn a different contiguous span of context tokens to attend to, our work finds different sparsity patterns in the same span. Interestingly, some of their findings mirror ours – we found that attention heads in the last layers tend to be denser on average when compared to the ones in the first layers, while their work has found that lower layers tend to have a shorter attention span compared to higher layers. Related Work ::: Transformer interpretability. The original Transformer paper BIBREF0 shows attention visualizations, from which some speculation can be made of the roles the several attention heads have. 
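Two of the head-level statistics used in this analysis can be sketched compactly before moving on; the NumPy code below is an illustrative reconstruction (not the authors' scripts) of the generalized Jensen-Shannon head-diversity measure and of the positional-head confidence (average weight placed on the token at a fixed offset). The base adjustment by the distribution's dimension follows the description above, but its exact form here is an assumption.

import numpy as np

def shannon_entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def head_diversity(P):
    """P: (n_heads, n_tokens), one attention row per head for the same query.
    JS = H(mean over heads) - mean over heads of H(head), scaled to lie in [0, 1]."""
    js = shannon_entropy(P.mean(axis=0)) - shannon_entropy(P).mean()
    return float(js / np.log(P.shape[-1]))

def positional_confidence(W, offset=-1):
    """W: (n_tokens, n_tokens) attention matrix of one head on one sentence; average
    weight assigned to the token at the given offset (-1 = previous token)."""
    n = W.shape[0]
    vals = [W[i, i + offset] for i in range(n) if 0 <= i + offset < n]
    return float(np.mean(vals))

heads_agree = np.tile([0.7, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0], (8, 1))
heads_differ = np.eye(8)                         # every head attends to a different token
print(head_diversity(heads_agree), head_diversity(heads_differ))   # ~0.0 and ~1.0

prev_head = np.zeros((6, 6))
for i in range(6):
    prev_head[i, max(i - 1, 0)] = 1.0            # a perfectly confident previous-token head
print(positional_confidence(prev_head, offset=-1))   # 1.0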
BIBREF7 study the syntactic abilities of the Transformer self-attention, while BIBREF6 extract dependency relations from the attention weights. BIBREF8 find that the self-attentions in BERT BIBREF3 follow a sequence of processes that resembles a classical NLP pipeline. Regarding redundancy of heads, BIBREF9 develop a method that is able to prune heads of the multi-head attention module and make an empirical study of the role that each head has in self-attention (positional, syntactic and rare words). BIBREF36 also aim to reduce head redundancy by adding a regularization term to the loss that maximizes head disagreement and obtain improved results. While not considering Transformer attentions, BIBREF18 show that traditional attention mechanisms do not necessarily improve interpretability since softmax attention is vulnerable to an adversarial attack leading to wildly different model predictions for the same attention weights. Sparse attention may mitigate these issues; however, our work focuses mostly on a more mechanical aspect of interpretation by analyzing head behavior, rather than on explanations for predictions. Conclusion and Future Work We contribute a novel strategy for adaptively sparse attention, and, in particular, for adaptively sparse Transformers. We present the first empirical analysis of Transformers with sparse attention mappings (i.e., entmax), showing potential in both translation accuracy as well as in model interpretability. In particular, we analyzed how the attention heads in the proposed adaptively sparse Transformer can specialize more and with higher confidence. Our adaptivity strategy relies only on gradient-based optimization, side-stepping costly per-head hyper-parameter searches. Further speed-ups are possible by leveraging more parallelism in the bisection algorithm for computing $\alpha $-entmax. Finally, some of the automatically-learned behaviors of our adaptively sparse Transformers – for instance, the near-deterministic positional heads or the subword joining head – may provide new ideas for designing static variations of the Transformer. Acknowledgments This work was supported by the European Research Council (ERC StG DeepSPIN 758969), and by the Fundação para a Ciência e Tecnologia through contracts UID/EEA/50008/2019 and CMUPERI/TIC/0046/2014 (GoLocal). We are grateful to Ben Peters for the $\alpha $-entmax code and Erick Fonseca, Marcos Treviso, Pedro Martins, and Tsvetomila Mihaylova for insightful group discussion. We thank Mathieu Blondel for the idea to learn $\alpha $. We would also like to thank the anonymous reviewers for their helpful feedback. Supplementary Material Background ::: Regularized Fenchel-Young prediction functions Definition 1 (BIBREF23) Let $\Omega \colon \triangle ^d \rightarrow {\mathbb {R}}\cup \lbrace \infty \rbrace $ be a strictly convex regularization function. We define the prediction function $\mathbf {\pi }_{\Omega }$ as Background ::: Characterizing the @!START@$\alpha $@!END@-entmax mapping Lemma 1 (BIBREF14) For any $\mathbf {z}$, there exists a unique $\tau ^\star $ such that Proof: From the definition of $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}$, we may easily identify it with a regularized prediction function (Def. 
UNKREF81): We first note that for all $\mathbf {p}\in \triangle ^d$, From the constant invariance and scaling properties of $\mathbf {\pi }_{\Omega }$ BIBREF23, Using BIBREF23, noting that $g^{\prime }(t) = t^{\alpha - 1}$ and $(g^{\prime })^{-1}(u) = u^{1/(\alpha -1)}$, yields Since $\mathsf {H}^{\textsc {T}}_\alpha $ is strictly convex on the simplex, $\alpha \text{-}\mathsf {entmax}$ has a unique solution $\mathbf {p}^\star $. Equation DISPLAY_FORM88 implicitly defines a one-to-one mapping between $\mathbf {p}^\star $ and $\tau ^\star $ as long as $\mathbf {p}^\star \in \triangle $, therefore $\tau ^\star $ is also unique. Background ::: Connections to softmax and sparsemax The Euclidean projection onto the simplex, sometimes referred to, in the context of neural attention, as sparsemax BIBREF19, is defined as $\operatornamewithlimits{\mathsf {sparsemax}}(\mathbf {z}) := \operatornamewithlimits{\mathsf {argmin}}_{\mathbf {p}\in \triangle ^d} \Vert \mathbf {p}- \mathbf {z}\Vert _2^2$. The solution can be characterized through the unique threshold $\tau $ such that $\sum _i \operatornamewithlimits{\mathsf {sparsemax}}(\mathbf {z})_i = 1$ and $\operatornamewithlimits{\mathsf {sparsemax}}(\mathbf {z})_i = [z_i - \tau ]_+$ BIBREF38. Thus, each coordinate of the sparsemax solution is a piecewise-linear function. Visibly, this expression is recovered when setting $\alpha =2$ in the $\alpha $-entmax expression (Equation DISPLAY_FORM85); for other values of $\alpha $, the exponent induces curvature. On the other hand, the well-known softmax is usually defined through the expression $\operatornamewithlimits{\mathsf {softmax}}(\mathbf {z})_i = \frac{\exp (z_i)}{\sum _j \exp (z_j)}$ (Equation DISPLAY_FORM92), which can be shown to be the unique solution of the optimization problem $\operatornamewithlimits{\mathsf {argmax}}_{\mathbf {p}\in \triangle ^d} \mathbf {p}^\top \mathbf {z} + \mathsf {H}^\textsc {S}(\mathbf {p})$, where $\mathsf {H}^\textsc {S}(\mathbf {p}) = -\sum _i p_i \log p_i$ is the Shannon entropy. Indeed, setting the gradient to 0 yields the condition $\log p_i = z_i - \nu _i - \tau - 1$, where $\tau $ and $\nu \ge 0$ are Lagrange multipliers for the simplex constraints $\sum _i p_i = 1$ and $p_i \ge 0$, respectively. Since the l.h.s. is only finite for $p_i>0$, we must have $\nu _i=0$ for all $i$, by complementary slackness. Thus, the solution must have the form $p_i = \frac{\exp (z_i)}{Z}$, yielding Equation DISPLAY_FORM92. Jacobian of $\alpha $-entmax w.r.t. the shape parameter $\alpha $: Proof of Proposition UID22 Recall that the entmax transformation is defined as $\alpha \text{-}\mathsf {entmax}(\mathbf {z}) := \operatornamewithlimits{\mathsf {argmax}}_{\mathbf {p}\in \triangle ^d} \mathbf {p}^\top \mathbf {z} + \mathsf {H}^{\textsc {T}}_{\alpha }(\mathbf {p})$, where $\alpha \ge 1$ and $\mathsf {H}^{\textsc {T}}_{\alpha }$ is the Tsallis entropy, and $\mathsf {H}^\textsc {S}(\mathbf {p}):= -\sum _j p_j \log p_j$ is the Shannon entropy. In this section, we derive the Jacobian of $\mathsf {entmax}$ with respect to the scalar parameter $\alpha $. Jacobian of $\alpha $-entmax w.r.t. the shape parameter $\alpha $: Proof of Proposition UID22 ::: General case of $\alpha >1$ From the KKT conditions associated with the optimization problem in Eq. DISPLAY_FORM85, we have that the solution $\mathbf {p}^{\star }$ has the following form, coordinate-wise: $p^{\star }_i = \big [(\alpha -1) z_i - \tau ^{\star }\big ]_+^{1/(\alpha -1)}$ (Eq. DISPLAY_FORM97), where $\tau ^{\star }$ is a scalar Lagrange multiplier that ensures that $\mathbf {p}^{\star }$ normalizes to 1, i.e., it is defined implicitly by the condition $\sum _i \big [(\alpha -1) z_i - \tau ^{\star }\big ]_+^{1/(\alpha -1)} = 1$ (Eq. DISPLAY_FORM98). For general values of $\alpha $, Eq. DISPLAY_FORM98 lacks a closed form solution. This makes the computation of the Jacobian non-trivial. Fortunately, we can use the technique of implicit differentiation to obtain this Jacobian. The Jacobian exists almost everywhere, and the expressions we derive yield a generalized Jacobian BIBREF37 at any non-differentiable points that may occur for certain ($\alpha $, $\mathbf {z}$) pairs.
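Before the derivation continues, the sparsemax case discussed in this appendix ($\alpha = 2$) admits the sort-based closed form above, which a small NumPy sketch may help make concrete; this is an illustrative implementation of the standard threshold computation, not code from the paper, and the toy inputs are arbitrary.

import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex (the alpha = 2 entmax case)."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, z.size + 1)
    support = k[1.0 + k * z_sorted > cumsum]      # candidate support sizes
    k_star = support[-1]                          # largest k for which the threshold is valid
    tau = (cumsum[k_star - 1] - 1.0) / k_star     # threshold so the output sums to one
    return np.clip(z - tau, 0.0, None)

print(sparsemax([1.0, 0.9, 0.1]))        # [0.55 0.45 0.  ] -- piecewise linear and sparse
print(sparsemax([2.0, 1.0, 0.5, -1.0]))  # [1. 0. 0. 0.]

Reintroducing the exponent of the general thresholded form, $[(\alpha -1)\mathbf {z} - \tau \mathbf {1}]_+^{1/(\alpha -1)}$, is exactly what adds curvature for $1 < \alpha < 2$ and makes the threshold no longer computable by a single sort.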
We begin by noting that $\frac{\partial p_i^{\star }}{\partial \alpha } = 0$ if $p_i^{\star } = 0$, because increasing $\alpha $ keeps sparse coordinates sparse. Therefore we need to worry only about coordinates that are in the support of $\mathbf {p}^\star $. We will assume hereafter that the $i$th coordinate of $\mathbf {p}^\star $ is non-zero. We have: We can see that this Jacobian depends on $\frac{\partial \tau ^{\star }}{\partial \alpha }$, which we now compute using implicit differentiation. Let $\mathcal {S} = \lbrace i: p^\star _i > 0 \rbrace $). By differentiating both sides of Eq. DISPLAY_FORM98, re-using some of the steps in Eq. DISPLAY_FORM101, and recalling Eq. DISPLAY_FORM97, we get from which we obtain: Finally, plugging Eq. DISPLAY_FORM103 into Eq. DISPLAY_FORM101, we get: where we denote by The distribution $\tilde{\mathbf {p}}(\alpha )$ can be interpreted as a “skewed” distribution obtained from $\mathbf {p}^{\star }$, which appears in the Jacobian of $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}(\mathbf {z})$ w.r.t. $\mathbf {z}$ as well BIBREF14. Jacobian of @!START@$\alpha $@!END@-entmax w.r.t. the shape parameter @!START@$\alpha $@!END@: Proof of Proposition @!START@UID22@!END@ ::: Solving the indetermination for @!START@$\alpha =1$@!END@ We can write Eq. DISPLAY_FORM104 as When $\alpha \rightarrow 1^+$, we have $\tilde{\mathbf {p}}(\alpha ) \rightarrow \mathbf {p}^{\star }$, which leads to a $\frac{0}{0}$ indetermination. To solve this indetermination, we will need to apply L'Hôpital's rule twice. Let us first compute the derivative of $\tilde{p}_i(\alpha )$ with respect to $\alpha $. We have therefore Differentiating the numerator and denominator in Eq. DISPLAY_FORM107, we get: with and When $\alpha \rightarrow 1^+$, $B$ becomes again a $\frac{0}{0}$ indetermination, which we can solve by applying again L'Hôpital's rule. Differentiating the numerator and denominator in Eq. DISPLAY_FORM112: Finally, summing Eq. DISPLAY_FORM111 and Eq. DISPLAY_FORM113, we get Jacobian of @!START@$\alpha $@!END@-entmax w.r.t. the shape parameter @!START@$\alpha $@!END@: Proof of Proposition @!START@UID22@!END@ ::: Summary To sum up, we have the following expression for the Jacobian of $\mathop {\mathsf {\alpha }\textnormal {-}\mathsf {entmax }}$ with respect to $\alpha $: Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How does their model improve interpretability compared to softmax transformers? Only give me the answer and do not output any other words.
Answer:
[ "the attention heads in the proposed adaptively sparse Transformer can specialize more and with higher confidence", "We introduce sparse attention into the Transformer architecture" ]
276
7,650
single_doc_qa
8ba170eae28b43499bb8b27577b852da
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Understanding the emotions expressed in a text or message is of high relevance nowadays. Companies are interested in this to get an understanding of the sentiment of their current customers regarding their products and the sentiment of their potential customers to attract new ones. Moreover, changes in a product or a company may also affect the sentiment of a customer. However, the intensity of an emotion is crucial in determining the urgency and importance of that sentiment. If someone is only slightly happy about a product, is a customer willing to buy it again? Conversely, if someone is very angry about customer service, his or her complaint might be given priority over somewhat milder complaints. BIBREF0 present four tasks in which systems have to automatically determine the intensity of emotions (EI) or the intensity of the sentiment (Valence) of tweets in the languages English, Arabic, and Spanish. The goal is to either predict a continuous regression (reg) value or to do ordinal classification (oc) based on a number of predefined categories. The EI tasks have separate training sets for four different emotions: anger, fear, joy and sadness. Due to the large number of subtasks and the fact that this language does not have many resources readily available, we only focus on the Spanish subtasks. Our work makes the following contributions: Our submissions ranked second (EI-Reg), second (EI-Oc), fourth (V-Reg) and fifth (V-Oc), demonstrating that the proposed method is accurate in automatically determining the intensity of emotions and sentiment of Spanish tweets. This paper will first focus on the datasets, the data generation procedure, and the techniques and tools used. Then we present the results in detail, after which we perform a small error analysis on the largest mistakes our model made. We conclude with some possible ideas for future work. Data For each task, the training data that was made available by the organizers is used, which is a selection of tweets with for each tweet a label describing the intensity of the emotion or sentiment BIBREF1 . Links and usernames were replaced by the general tokens URL and @username, after which the tweets were tokenized by using TweetTokenizer. All text was lowercased. In a post-processing step, it was ensured that each emoji is tokenized as a single token. Word Embeddings To be able to train word embeddings, Spanish tweets were scraped between November 8, 2017 and January 12, 2018. We chose to create our own embeddings instead of using pre-trained embeddings, because this way the embeddings would resemble the provided data set: both are based on Twitter data. Added to this set was the Affect in Tweets Distant Supervision Corpus (DISC) made available by the organizers BIBREF0 and a set of 4.1 million tweets from 2015, obtained from BIBREF2 . After removing duplicate tweets and tweets with fewer than ten tokens, this resulted in a set of 58.7 million tweets, containing 1.1 billion tokens. The tweets were preprocessed using the method described in Section SECREF6 . The word embeddings were created using word2vec in the gensim library BIBREF3 , using CBOW, a window size of 40 and a minimum count of 5. The feature vectors for each tweet were then created by using the AffectiveTweets WEKA package BIBREF4 . 
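The embedding step described above can be sketched with gensim; the snippet below is a hedged illustration rather than the authors' script: the toy corpus stands in for the 58.7 million preprocessed tweets, the vector dimensionality is an assumption since the passage does not state it, and the parameter names follow gensim 4.x.

from gensim.models import Word2Vec

# Stand-in for the preprocessed, tokenized tweet corpus (lowercased, URL/@username tokens,
# emoji kept as single tokens); repeated only so the toy vocabulary survives min_count=5.
tokenized_tweets = [
    ["me", "encanta", "este", "producto", "URL"],
    ["@username", "muy", "feliz", "hoy", "😀"],
] * 10

model = Word2Vec(
    sentences=tokenized_tweets,
    sg=0,             # CBOW, as stated in the passage
    window=40,
    min_count=5,
    vector_size=100,  # illustrative; the passage does not give the dimensionality
    workers=4,
)
print(model.wv["feliz"].shape)   # per-word embedding vector

The tweet-level feature vectors themselves were then assembled with the AffectiveTweets WEKA package, which is outside the scope of this sketch.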
Translating Lexicons Most lexical resources for sentiment analysis are in English. To still be able to benefit from these sources, the lexicons in the AffectiveTweets package were translated to Spanish, using the machine translation platform Apertium BIBREF5 . All lexicons from the AffectiveTweets package were translated, except for SentiStrength. Instead of translating this lexicon, the English version was replaced by the Spanish variant made available by BIBREF6 . For each subtask, the optimal combination of lexicons was determined. This was done by first calculating the benefits of adding each lexicon individually, after which only beneficial lexicons were added until the score did not increase anymore (e.g. after adding the best four lexicons the fifth one did not help anymore, so only four were added). The tests were performed using a default SVM model, with the set of word embeddings described in the previous section. Each subtask thus uses a different set of lexicons (see Table TABREF1 for an overview of the lexicons used in our final ensemble). For each subtask, this resulted in a (modest) increase on the development set, between 0.01 and 0.05. Translating Data The training set provided by BIBREF0 is not very large, so it was interesting to find a way to augment the training set. A possible method is to simply translate the datasets into other languages, leaving the labels intact. Since the present study focuses on Spanish tweets, all tweets from the English datasets were translated into Spanish. This new set of “Spanish” data was then added to our original training set. Again, the machine translation platform Apertium BIBREF5 was used for the translation of the datasets. Algorithms Used Three types of models were used in our system, a feed-forward neural network, an LSTM network and an SVM regressor. The neural nets were inspired by the work of Prayas BIBREF7 in the previous shared task. Different regression algorithms (e.g. AdaBoost, XGBoost) were also tried due to the success of SeerNet BIBREF8 , but our study was not able to reproduce their results for Spanish. For both the LSTM network and the feed-forward network, a parameter search was done for the number of layers, the number of nodes and dropout used. This was done for each subtask, i.e. different tasks can have a different number of layers. All models were implemented using Keras BIBREF9 . After the best parameter settings were found, the results of 10 system runs to produce our predictions were averaged (note that this is different from averaging our different type of models in Section SECREF16 ). For the SVM (implemented in scikit-learn BIBREF10 ), the RBF kernel was used and a parameter search was conducted for epsilon. Detailed parameter settings for each subtask are shown in Table TABREF12 . Each parameter search was performed using 10-fold cross validation, as to not overfit on the development set. Semi-supervised Learning One of the aims of this study was to see if using semi-supervised learning is beneficial for emotion intensity tasks. For this purpose, the DISC BIBREF0 corpus was used. This corpus was created by querying certain emotion-related words, which makes it very suitable as a semi-supervised corpus. However, the specific emotion the tweet belonged to was not made public. Therefore, a method was applied to automatically assign the tweets to an emotion by comparing our scraped tweets to this new data set. 
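Stepping back briefly to the lexicon selection described above (the semi-supervised procedure continues below), the greedy forward search can be sketched as follows; evaluate_with is a hypothetical stand-in for training the default SVM with a given lexicon combination and scoring it on the development set, and the lexicon names and gains in the toy usage are invented for illustration.

def greedy_lexicon_selection(candidate_lexicons, evaluate_with):
    """Add the single most beneficial lexicon at a time; stop when no addition helps."""
    chosen, best_score = [], evaluate_with([])
    remaining = list(candidate_lexicons)
    while remaining:
        score, lexicon = max((evaluate_with(chosen + [lex]), lex) for lex in remaining)
        if score <= best_score:
            break                  # the best remaining lexicon no longer improves the dev score
        chosen.append(lexicon)
        remaining.remove(lexicon)
        best_score = score
    return chosen, best_score

# Toy usage with invented lexicons and additive gains.
gains = {"NRC-Affect": 0.03, "SentiStrength-ES": 0.02, "Noisy-Lex": -0.01}
evaluate = lambda lexicons: 0.60 + sum(gains[l] for l in lexicons)
print(greedy_lexicon_selection(gains, evaluate))   # keeps only the two helpful lexicons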
First, in an attempt to obtain the query-terms, we selected the 100 words which occurred most frequently in the DISC corpus, in comparison with their frequencies in our own scraped tweets corpus. Words that were clearly not indicators of emotion were removed. The rest was annotated per emotion or removed if it was unclear to which emotion the word belonged. This allowed us to create silver datasets per emotion, assigning tweets to an emotion if an annotated emotion-word occurred in the tweet. Our semi-supervised approach is quite straightforward: first a model is trained on the training set and then this model is used to predict the labels of the silver data. This silver data is then simply added to our training set, after which the model is retrained. However, an extra step is applied to ensure that the silver data is of reasonable quality. Instead of training a single model initially, ten different models were trained which predict the labels of the silver instances. If the highest and lowest prediction do not differ more than a certain threshold the silver instance is maintained, otherwise it is discarded. This results in two parameters that could be optimized: the threshold and the number of silver instances that would be added. This method can be applied to both the LSTM and feed-forward networks that were used. An overview of the characteristics of our data set with the final parameter settings is shown in Table TABREF14 . Usually, only a small subset of data was added to our training set, meaning that most of the silver data is not used in the experiments. Note that since only the emotions were annotated, this method is only applicable to the EI tasks. Ensembling To boost performance, the SVM, LSTM, and feed-forward models were combined into an ensemble. For both the LSTM and feed-forward approach, three different models were trained. The first model was trained on the training data (regular), the second model was trained on both the training and translated training data (translated) and the third one was trained on both the training data and the semi-supervised data (silver). Due to the nature of the SVM algorithm, semi-supervised learning does not help, so only the regular and translated model were trained in this case. This results in 8 different models per subtask. Note that for the valence tasks no silver training data was obtained, meaning that for those tasks the semi-supervised models could not be used. Per task, the LSTM and feed-forward model's predictions were averaged over 10 prediction runs. Subsequently, the predictions of all individual models were combined into an average. Finally, models were removed from the ensemble in a stepwise manner if the removal increased the average score. This was done based on their original scores, i.e. starting out by trying to remove the worst individual model and working our way up to the best model. We only consider it an increase in score if the difference is larger than 0.002 (i.e. the difference between 0.716 and 0.718). If at some point the score does not increase and we are therefore unable to remove a model, the process is stopped and our best ensemble of models has been found. This process uses the scores on the development set of different combinations of models. Note that this means that the ensembles for different subtasks can contain different sets of models. The final model selections can be found in Table TABREF17 . 
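The stepwise ensembling just described can be sketched as a greedy backward elimination; the code below is an illustrative reconstruction rather than the authors' implementation, with Pearson correlation standing in for the official evaluation metric and randomly generated toy predictions.

import numpy as np

def stepwise_ensemble(predictions, individual_scores, score, min_gain=0.002):
    """predictions: name -> dev-set prediction vector; individual_scores: name -> dev score.
    Starting from the average of all models, try to drop models from worst to best and stop
    at the first removal that does not improve the ensemble score by more than min_gain."""
    kept = dict(predictions)
    best = score(np.mean(list(kept.values()), axis=0))
    for name in sorted(individual_scores, key=individual_scores.get):
        if len(kept) == 1:
            break
        trial = {k: v for k, v in kept.items() if k != name}
        trial_score = score(np.mean(list(trial.values()), axis=0))
        if trial_score - best > min_gain:
            kept, best = trial, trial_score
        else:
            break                  # stop as soon as a removal does not help
    return sorted(kept), best

rng = np.random.default_rng(0)
gold = rng.random(100)
preds = {"SVM-r": gold + rng.normal(0, 0.40, 100),
         "FF-t": gold + rng.normal(0, 0.05, 100),
         "LSTM-s": gold + rng.normal(0, 0.10, 100)}
pearson = lambda p: float(np.corrcoef(p, gold)[0, 1])
individual = {name: pearson(p) for name, p in preds.items()}
print(stepwise_ensemble(preds, individual, pearson))   # surviving models and ensemble score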
Results and Discussion Table TABREF18 shows the results on the development set of all individuals models, distinguishing the three types of training: regular (r), translated (t) and semi-supervised (s). In Tables TABREF17 and TABREF18 , the letter behind each model (e.g. SVM-r, LSTM-r) corresponds to the type of training used. Comparing the regular and translated columns for the three algorithms, it shows that in 22 out of 30 cases, using translated instances as extra training data resulted in an improvement. For the semi-supervised learning approach, an improvement is found in 15 out of 16 cases. Moreover, our best individual model for each subtask (bolded scores in Table TABREF18 ) is always either a translated or semi-supervised model. Table TABREF18 also shows that, in general, our feed-forward network obtained the best results, having the highest F-score for 8 out of 10 subtasks. However, Table TABREF19 shows that these scores can still be improved by averaging or ensembling the individual models. On the dev set, averaging our 8 individual models results in a better score for 8 out of 10 subtasks, while creating an ensemble beats all of the individual models as well as the average for each subtask. On the test set, however, only a small increase in score (if any) is found for stepwise ensembling, compared to averaging. Even though the results do not get worse, we cannot conclude that stepwise ensembling is a better method than simply averaging. Our official scores (column Ens Test in Table TABREF19 ) have placed us second (EI-Reg, EI-Oc), fourth (V-Reg) and fifth (V-Oc) on the SemEval AIT-2018 leaderboard. However, it is evident that the results obtained on the test set are not always in line with those achieved on the development set. Especially on the anger subtask for both EI-Reg and EI-Oc, the scores are considerably lower on the test set in comparison with the results on the development set. Therefore, a small error analysis was performed on the instances where our final model made the largest errors. Error Analysis Due to some large differences between our results on the dev and test set of this task, we performed a small error analysis in order to see what caused these differences. For EI-Reg-anger, the gold labels were compared to our own predictions, and we manually checked 50 instances for which our system made the largest errors. Some examples that were indicative of the shortcomings of our system are shown in Table TABREF20 . First of all, our system did not take into account capitalization. The implications of this are shown in the first sentence, where capitalization intensifies the emotion used in the sentence. In the second sentence, the name Imperator Furiosa is not understood. Since our texts were lowercased, our system was unable to capture the named entity and thought the sentence was about an angry emperor instead. In the third sentence, our system fails to capture that when you are so angry that it makes you laugh, it results in a reduced intensity of the angriness. Finally, in the fourth sentence, it is the figurative language me infla la vena (it inflates my vein) that the system is not able to understand. The first two error-categories might be solved by including smart features regarding capitalization and named entity recognition. However, the last two categories are problems of natural language understanding and will be very difficult to fix. 
Conclusion To conclude, the present study described our submission for the Semeval 2018 Shared Task on Affect in Tweets. We participated in four Spanish subtasks and our submissions ranked second, second, fourth and fifth place. Our study aimed to investigate whether the automatic generation of additional training data through translation and semi-supervised learning, as well as the creation of stepwise ensembles, increase the performance of our Spanish-language models. Strong support was found for the translation and semi-supervised learning approaches; our best models for all subtasks use either one of these approaches. These results suggest that both of these additional data resources are beneficial when determining emotion intensity (for Spanish). However, the creation of a stepwise ensemble from the best models did not result in better performance compared to simply averaging the models. In addition, some signs of overfitting on the dev set were found. In future work, we would like to apply the methods (translation and semi-supervised learning) used on Spanish on other low-resource languages and potentially also on other tasks. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How was the training data translated? Only give me the answer and do not output any other words.
Answer:
[ "using the machine translation platform Apertium ", "machine translation platform Apertium BIBREF5" ]
276
3,036
single_doc_qa
68e02fc2fc5044b28d909e884b0af3a8
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: The major actions taken by the board of directors and council during the national meeting in Chicago were reported in C&EN, April 30 (page 32). The Society Committee on Budget & Finance met on Saturday, March 24, to review the society's 2006 financial performance. The society ended 2006 with a net contribution from operations of $12.2 million, on revenues of $424.0 million and expenses of $411.8 million. This was $7.8 million favorable to the approved budget. After including the results of the Member Insurance Program and new ventures, the society's overall net contribution for 2006 was $11.5 million, which was $7.4 million favorable to the approved budget. The favorable variance was primarily attributable to higher than budgeted electronic services revenue and investment income, as well as expense savings from lower than budgeted health care costs and reduced IT spending. In addition, the society ended the year in compliance with the board-established financial guidelines. The Society Committee on Education (SOCED) received an update from President Catherine Hunt on the thematic programming featured in Chicago focusing on the sustainability of energy, food, and water. President-Elect Bruce Bursten solicited input from the committee pertaining to the central role of education in his agenda. SOCED received a presentation from the Membership Affairs Committee on its white paper on membership requirements. Committee members strongly support the proposal to include undergraduates as members of the society, but they requested that financial arrangements be clearly spelled out in the petition to ensure that the highly successful Student Affiliates program remains intact. The committee discussed the Education Division programs that were reviewed in 2006 and those that will be reviewed in 2007, under the auspices of the Program Review Advisory Group. SOCED received an update from the Committee on Professional Training regarding the draft ACS guidelines for approval of bachelor's degree programs in chemistry. Committee members discussed the report prepared by the Globalization Task Force, focusing on those sections relevant to education. The committee suggested initiatives related to the new ACS strategic plan, including a potential program that would engage retired chemists in the K-12 classroom. SOCED created a task force to consider the role of online, or "virtual," simulations in the chemistry laboratory, recognizing the value of online/virtual experiences as a supplement to, but not a replacement for, hands-on laboratory experiments. The committee ratified two interim actions taken since the Dec. 3, 2006, meeting: to remove the financial restriction of the ACS Petroleum Research Fund (ACS PRF) Supplement for Underrepresented Minority Research Programs (SUMR) and to contact nominators whose nominations for the Volunteer Service Award had expired and to invite them to reactivate their nomination packet for the 2008 Volunteer Service Award. 
Acting under delegated authority, the committee voted to accept the recommendations of the ACS Petroleum Research Fund Advisory Board (February 2007 meeting) for funding grants totaling $5.2 million; voted to recommend to the board a screened list of six nominees (due to a two-way tie for fifth place) for the 2008 Priestley Medal; voted to recommend to the board a screened list of five nominees for the 2008 Award for Volunteer Service to ACS; on the recommendation of the ACS Committee on Frasch Foundation Grants, voted to recommend to the board that it recommend to the trustee (US Trust) of the Frasch Foundation 12 grants for research in agricultural chemistry for the period of 2007–12; voted to recommend to the ACS Board of Directors that a new national award be established, the "ACS Award for Affordable Green Chemistry," sponsored by Rohm and Haas; and voted to recommend to the ACS Board of Directors that a new endowment be established, the "Affordable Green Chemistry Endowment Fund," to support the award. The committee also reviewed the final report from the Special Board Task Force on the Review of the ACS National Awards Program, chaired by Ronald Breslow; established a Canvassing & Selection Subcommittee; and reviewed a list of external awards for which ACS may want to nominate candidates. The committee agreed to include the list of significant external awards in the awards locator database that is being developed. The committee was updated on efforts to reconcile ACS's technical divisions' desires to leverage national meeting content using the Internet with our journal editors' concerns about prior publication issues. A conference call on this issue was scheduled for April 21, 2007. The committee received a presentation on the recent actions of the ACS Board of Directors International Strategy Group (ISG). The group's charge is to develop recommendations for a short- and long-term international strategy for the society. The committee was updated on the status of the activities of the Board Oversight Group on Leadership Development (BOG). Potential solutions for the unexpectedly high cost of facilitator training and transitioning from the current Leaders Conference format to the newly designed curriculum were presented to the committee. The committee reviewed plans for conducting the 2007 Membership Satisfaction Survey. Preliminary results are expected in May or June with a final report to be delivered to the board at the 2007 Boston national meeting. The committee received a briefing on the status of the MORE Project: Multidisciplinary Opportunities though Resource Enhancement. Twenty-eight proposals were received, and a decision on which proposals to support will be made in early May. The chair led a discussion on draft 2007 committee goals, and committee members offered several suggestions related to successfully meeting them. One suggestion was to modify a communications goal to make it more completely reflect the duties of the committee outlined in the board regulations. The chair and committee members will examine the suggestion and revisit the question after the board retreat where board committee duties will be examined. ACS President Hunt discussed her 2007-08 Presidential Task Force on Enhancing Science & Technology, which is charged with developing advocacy best practices that can enhance ACS's attainment of its public policy priorities. The task force is composed of a diverse set of ACS members as well as former U.S. 
Representative and chairman of the House Science Committee, Sherwood Boehlert, who will cochair the task force. • Results of the 2007 Public Policy Priorities Survey, which resulted in a four-tiered ranking of ACS's 2007 public policies. The ranking will help focus staff resources in conducting outreach and advocacy on behalf of ACS members. • The hiring of a communications consulting firm for 2007 to assist ACS in implementing the initial phase of the ACS Strategic Communications Plan. • Creation of a pilot ACS state government affairs advocacy program. Committee members agreed to the creation of a pilot, and staff will propose an initial list of states, policy focus, and a budget to carry out the program. The committee met in executive session on March 23 and in open session jointly with the Joint Board-Council Committee on Publications and the Division of Chemical Information on March 26. The committee heard from Chemical Abstracts Service (CAS) management on a range of issues including a report on continuing database building efforts, product enhancements, and CAS's centennial celebration plans. The Committee on Chemical Safety (CCS) provides advice on the handling of chemicals and seeks to ensure safe facilities, designs, and operations by calling attention to potential hazards and stimulating education in safe practices. CCS has several publications (many downloadable), including the flagship publication, "Safety in Academic Chemistry Labs" (SACL). Work has recently started on the translation of SACL into Arabic. This is in addition to the online Spanish version of SACL. Also online are the "Student Lab Code of Conduct for Secondary Science Programs" and a security vulnerability analysis checklist. A K-12 restricted hazardous substances list is under development. The third edition of the "Chemical Safety Manual for Small Businesses" will be ready soon. The committee's Task Force on Laboratory Environment, Health & Safety is working on a new edition of "Laboratory Waste Management." Task force members also commented on the recent Environmental Protection Agency Proposed Rule for Hazardous Waste in Academic Laboratories. Our Video Safety Resources Task Force is developing video resources to be distributed over the Web. CCS has been involved in collaborations for the updating of publications like "Prudent Practices in the Laboratory" and "ACS Guidelines for the Teaching of High School Chemistry." Along with other ACS units, CCS is exploring participating in the EPA's School Chemicals Cleanout Campaign. The Committee on Chemists with Disabilities (CWD) met at the 233rd ACS national meeting, Chicago, on Monday, March 26. Judy Summers-Gates reported on the Joint Subcommittee on Diversity meeting. This subcommittee is made up of representatives of the five committees that support people in chemistry (as opposed to a category of the profession): CWD, Committee on Minority Affairs, Committee on Technician Affairs, Women Chemists Committee, and Younger Chemists Committee, and its goal is to develop ways to coordinate the efforts of the five groups. The CWD Ambassador Program that was announced at CWD's 25th anniversary celebration at the Washington, D.C., meeting was discussed. Zelda Wasserman reported on the status of the letter from CWD to the ACS Board regarding captioning of ACS video materials. Janelle Kasper-Wolf, of ACS staff, discussed adding new questions to the ACS annual employment salary survey to obtain information for the committee. 
At the Chicago national meeting, the Committee on Community Activities (CCA) partnered with the ACS Education Division and the Office of the President to host "Chemistry In Action—It's Easy Being Green" at the Peggy Notebaert Nature Museum on Saturday, March 24. More than 250 children participated in the hands-on activities focused on recycling. ACS President Hunt presented a Salutes to Excellence plaque to the museum for its dedication to community outreach. The Chemists Celebrate Earth Day celebration occurred in 120 local sections with 138 coordinators leading the efforts within their communities. This represents an increase of more than 30% in local section and coordinator participation from 2006. CCA was featured in C&EN's April 16th issue on page 53. A shortcut to CCA's homepage was created: chemistry.org/committees/cca.html. During the Boston national meeting, CCA and the Office of Community Activities will celebrate National Chemistry Week's 20th Anniversary and its theme, "The Many Faces of Chemistry." A special outreach event is being planned for Sunday, Aug. 19. Hands-on activities will focus on health and wellness. The Committee on Corporation Associates (CCA) advises and influences ACS to ensure that its products and services are of value to industrial members and their companies. CCA vice chair, Roslyn White (SC Johnson), provided an overview of recent interactions between Corporation Associates and the U.K.-based Society of Chemical Industry (SCI). CCA gave feedback to a recommendations report from the ACS Board Committee on Professional & Member Relations Task Force on Globalization. Presentations were also received from the ACS Green Chemistry Institute and SCI. Staff reported on the Department of Industry Member Programs' activities since the San Francisco meeting. The report covered the Regional Industrial Innovation Awards, the World Congress on Industrial Biotechnology, the Analytical Pavilion sponsored by C&EN, and the ACS/Pharma Leaders Meeting. The Awards/Finance & Grants Subcommittee reported that CCA received two funding proposals that total $7,500. Funding was provided to the following: The Committee on Economic & Professional Affairs at $3,000 for the Chicago symposium on "Benefits Trends for the Chemical Workforce" and the Office of Graduate Education and the Department of Career Development & Management at $4,500 for a workshop on "Preparing for Life after Graduate School," to be held in conjunction with the 39th Central Regional Meeting. The subcommittee also requested that ACS staff provide CCA with an official annual statement of Corporation Associates' financial reserves as of Jan. 1 of each year. The Programs Subcommittee reported on planned programming activities in 2007 and beyond between CCA and SCI. The subcommittee gave an update on a Boston symposium cosponsored by Corporation Associates and the Medicinal Chemistry Division featuring past ACS Heroes of Chemistry from the pharmaceutical industry. By request of the subcommittee, representatives from Chemical Abstracts Service gave an overview on AnaVist—a tool with potential applications for CCA's efforts to provide member companies with industry-relevant information and reports. The subcommittee also requested that CCA earmark approximately $20,000 of Corporation Associates funds in 2008 for its activities. 
The Educational Outreach Subcommittee reported on its decision to collaborate with the Graduate Student Symposium Programming Committee of the Chemical Education Division on a graduate student industry roundtable program in Boston. The subcommittee requested $5,000 in support of this effort. The subcommittee also discussed a request for corporate executive support of an American Association of Physics Teachers initiative to promote undergraduate research. The Committee on Environmental Improvement (CEI) continues to be focused on the sustainability of the chemical enterprise. In Chicago, the committee introduced a multiyear effort to make ACS meetings more sustainable. This effort is designed to take advantage of the large size and diverse programs of the society to lead in the sustainability arena by "walking the talk." The committee held a dialogue with representatives of the U.S. Environmental Protection Agency who are trying to "green" federal conferences and work with the travel and tourism industry to change practices and shrink the environmental footprint of meetings. The committee also used advertising and student volunteers to engage individual meeting participants in a campaign to increase recycling by asking "Are you sustainable?" Moving forward, CEI looks forward to working closely with the Committee on Meetings & Expositions to advance this agenda. CEI was also pleased to participate in the meeting theme on sustainability through the ACS presidential programming. CEI cohosted the Monday presidential luncheon to discuss sustainability issues with the Committee on Science and is leading the follow-up to that luncheon, which will include recommendations on advancing sustainability in the three focal areas of the meeting—energy, food, and water. The committee also continued its dialogue with the Committee on Corporation Associates about a collaborative workshop. This activity, tentatively slated for the New Orleans meeting, will seek additional insights from chemical and allied products companies about public policy barriers that limit adoption of more sustainable products and practices as well as policy incentives that would lead to increased sustainability in the chemical enterprise. At its Chicago meeting, the committee welcomed the president of the Jordanian Chemical Society and the past-president of the Arab Union of Chemists. The committee was briefed on Pittcon 2007, where, with financial support from the Society of Analytical Chemists of Pittsburgh, ACS coordinated participation of a scientific delegation from Adriatic nations. The committee heard reports on the 2007 Frontiers of Chemical Science III: Research & Education in the Middle East meeting; the 2007 Transatlantic Frontiers of Chemistry meeting, which was jointly sponsored by ACS, the German Chemical Society, and the Royal Society of Chemistry; planned workshops to engage U.S. and Chinese early-career scientists in chemical biology, supramolecular, and new materials chemistry; and ACS Discovery Corps U.S./Brazil Research Collaboration Project in Biomass Conversion to Biofuels, Biomaterials & Chemicals. The committee discussed Latin American engagement opportunities created through Puerto Rico's involvement in three key chemical science events there: the 2009 ACS Southeast Regional Meeting, the 2008 Federation of Latin American Chemical Associations (FLAQ) meeting, and the proposed IUPAC 2011 Congress & General Assembly. 
The committee heard reports on letter-writing efforts by the ACS president to government officials in Libya and Mexico expressing concerns about challenges to the scientific freedom and human rights of scientists there. The Committee on Minority Affairs (CMA) approved new vision, mission, and values statements at the Chicago national meeting. The mission of CMA is to increase the participation of minority chemical scientists and influence policy on behalf of minorities in ACS and the chemical enterprise. An aggressive new strategic plan was approved by CMA to guide its activities over the next three years. By the end of 2009, CMA will increase the number of ACS Scholars that graduate to 100 per year, add 100 new minorities to leadership positions in ACS, engage in several collaborations, and increase the number of minority members of ACS by 5,000. CMA will focus initially on increasing minorities in ACS leadership. In working toward this goal, CMA began work on two new leadership-development programs for minority chemists. CMA continues to support the work of the Joint Subcommittee on Diversity (JSD) in developing programs, products, and services to ensure full participation of all members in ACS. In Chicago, JSD premiered a diversity booth at the meeting exposition hall and cosponsored symposia. The Committee on Patents & Related Matters (CPRM) discussed proposed legislative and regulatory changes to the U.S. patent system as well as open-access legislation and the potential effects such matters might have on industry and academia as well as on ACS. CPRM also continued its work on several new educational tools to assist and inform members on patent issues and other intellectual property matters important to a successful career in the chemical enterprise. Many of these tools are now available on the committee's expanded website, membership.acs.org/C/CPRM/. At the March 2007 meeting, the Committee on Professional Training (CPT) reviewed 42 new and additional information reports from ACS-approved chemistry programs. CPT held conferences with four schools seeking approval, discussed three updates and five site visit reports, and approved three new schools. The total number of ACS-approved chemistry programs is now 642. The committee released the full draft of the ACS guidelines for review and comment. Copies of the draft were distributed to the department chairs at all ACS-approved schools, the chairs of all ACS committees, and the chairs of all ACS technical divisions. Several CPT members met with the ACS technical divisions during the Chicago meeting to present an overview of the draft and obtain feedback. The draft guidelines document is available on the CPT website, and the committee invites any comments to be sent to [email protected]. In other business, the committee continued development of the two workshops with minority-serving institutions that will be held in 2007. The committee reviewed the editorial policies for the 2007 edition of the ACS Directory of Graduate Research, which is using a new protocol for retrieving research publication titles in an effort to improve the accuracy of the directory. C&EN finished 2006 with an exceptionally strong editorial package. The first months of 2007 are proving to be equally successful in fulfilling the magazine's mission of keeping its readers informed. On the advertising side, revenues in 2006 increased for the second year in a row, and the early months of 2007 show continuing positive signs. 
The most significant editorial achievement was the successful launch of the redesign of the print edition of C&EN with the Oct. 16, 2006, issue. The Subcommittee on Copyright has successfully updated the Copyright Module on the ACS Publications website. The subcommittee is looking into the possibility of conducting copyright programs at future ACS national and regional meetings. The final monitoring reports for Chemistry of Materials, Journal of Agricultural & Food Chemistry, and Molecular Pharmaceutics were presented and accepted by the committee. Journal of Chemical Information & Modeling, Organic Letters, Accounts of Chemical Research, and the Journal of Chemical Theory & Computation will be monitored next. 3. Examining the scientific basis of public policies related to the chemical sciences and making recommendations to the appropriate ACS units. In the first of these areas, ComSci partnered with President Hunt and the Committee on Environmental Improvement in planning and hosting a sustainability luncheon that featured roundtable discussions centering on a key sustainability question. At the Boston national meeting, ComSci will deliver a full-day program on the subject of "Partnerships in Innovation & Competitiveness." Regarding the second thrust, ComSci will present two programs in Boston: a box lunch that will feature two speakers taking opposing sides on the subject of "Genetic Screening & Diagnostic Testing: Do You Really Want to Know?" and a symposium titled "Creating & Sustaining International Research Collaborations." In support of the last thrust, ComSci is planning two events for 2008: "Balancing Security & Openness" will gather data to determine if the recent emphasis on security is hindering scientific progress and "Transitioning Chemical Science to Commercially Successful Products." The Women Chemists Committee (WCC) hosted more than 70 attendees at its open meeting recently in Chicago, where representatives from Iota Sigma Pi, Women in Science & Engineering, the Association of Women in Science, and the Chicago local section helped WCC celebrate the committee's 80th anniversary. The Women in Industry Breakfast was also highly successful with a new format of speed networking. More than 100 participants had the opportunity to practice their elevator speeches and make several professional connections. A related workshop will be offered by WCC in Boston. In Chicago, WCC sponsored two symposia, "Women Achieving Success: The ACS as a Platform in Leadership Development" in honor of Madeleine Joullié's 80th birthday and the ACS Award for Encouraging Women into Careers in the Chemical Sciences: Symposium in Honor of Bojan H. Jennings. More than 225 ACS meeting attendees were present for the biannual WCC Luncheon and heard the keynote speaker Laura Kiessling, 2007 Francis P. Garvan-John Olin Medal Recipient. Twelve women presented their research at this meeting with funding by the WCC/Eli Lilly Travel Grant Award. WCC members also spent time educating expo attendees on programs offered by the ACS Office of Diversity Programs at its new booth. In Chicago, the Younger Chemists Committee (YCC) welcomed its new committee members with an information session centered on YCC's charter as well as on its strategic plan: to make ACS relevant to younger chemists, to involve younger chemists in all levels of the society, and to integrate younger chemists into the profession. In January, YCC again hosted a Leadership Development Workshop during the ACS Leaders Conference. 
There were more than 80 applications for the 15 awards, which covered travel and registration for the conference. YCC plans to again fund the travel awards and provide leadership training for young chemists in 2008. YCC also solicited applications and selected a new graduate student representative on the Graduate Education Advisory Board. During the Chicago meeting, YCC programs included "Starting a Successful Research Program at a Predominantly Undergraduate Institution," "Career Experiences at the Interface of Chemistry & Biology," and "Chemistry Pedagogy 101." In addition to these programs, YCC cosponsored five programs with various committees and divisions. YCC continues to reach out to ACS committees and divisions and has initiated liaisonships with 11 technical divisions to encourage technical programming that highlights the contributions of younger chemists. Looking forward to Boston, YCC is planning symposia including "The Many Faces of Chemistry: International Opportunities for Chemists"; "Being a Responsible Chemist: Ethics, Politics & Policy"; and "Changing Landscapes of the Bio-Pharma Industry." The Committee on Committees (ConC) conducted its annual training session for new national committee chairs at the ACS Leaders Conference in January 2007. ConC's interactive session for committee chairs in Chicago served as an opportune follow-on and a forum for informative interchange among seasoned and new chairs. ConC began developing its recommendations for the 2008 committee chair appointments for consideration by the president-elect and chair of the board. ConC continues to focus efforts to identify members with the skills and expertise specified by the committee chairs using the councilor preference form. The form will be sent to councilors in May. ConC also seeks the names of noncouncilor members for consideration for service on council-related committees, especially those with no prior appointment. As part of ongoing activities with the joint CPC-Board Governance Review Task Force, ConC has collected data on committee liaisons to other committees. This information will be distributed to committee chairs. The number of liaisons indicates that unofficial but strong communication channels exist within the ACS committee structure. On Sunday evening, the Committee on Nominations & Elections (N&E) sponsored its fifth successful Town Hall Meeting for President-Elect Nominees. An estimated 200 people attended this session. This forum facilitated communication among the 2008 president-elect nominees, councilors, and other members. N&E will hold another Town Hall Meeting featuring the candidates for director-at-large at the fall meeting in Boston. Now that voting over the Internet has become an accepted procedure for ACS national elections, the ACS technical divisions and local sections have expressed strong interest in using this method for their elections. N&E has developed protocols for elections for local sections and divisions. This document will be forwarded to the appropriate committees for their review and distribution. N&E is responsible for reviewing annually the distribution of member populations within the six electoral districts to ensure that the districts have equitable representation. According to bylaw V, section 4(a), the member population of each electoral district must be within 10% of the result of dividing by six the number of members whose addresses lie within these districts. The committee is happy to report that the six electoral districts are in compliance. 
The committee has developed a petition on election procedures for president-elect and district director. The proposed election mechanism provides for a preferential (ranked) ballot and an "instant runoff." N&E continues to address the areas of campaigning and the timing of our national election process. Between the Chicago and Boston meetings, the committee plans to sponsor online forums for input from councilors and other interested members on these issues. In response to member concerns regarding the collection of signatures for petition candidates, N&E reviewed the society's bylaws. The bylaws state that an endorsement is required, but does not stipulate the method of endorsement. N&E has determined that original or electronic signatures are acceptable and will establish appropriate procedures for receipt of electronic signatures. The Committee on Constitution & Bylaws (C&B), acting for the council, issued new certified bylaws to the Corning Section, the Portland Section, the Division of Colloid & Surface Chemistry, and the Division of Chemical Education. The committee reviewed new proposed amendments for the Division of Medicinal Chemistry, the Columbus Section, the Detroit Section, and the Southern Arizona Section. Three petitions were presented to council for action at this meeting. Regarding the "Petition on Election Procedures 2006," a motion to separate the petition was approved, and the petition was divided. Provisions affecting bylaw V, sec. 2d, bylaw V, sec. 3c, and bylaw V, sec. 4f, which deal with election procedures and the timing of run-off elections, were approved by council and will become effective following confirmation by the board of directors. The second part of the petition regarding bylaw V, sec. 2c, and bylaw V, sec. 3b, which deal with signature requirements for petition candidates for president-elect and director-at-large respectively, was recommitted to the Committee on Nominations & Elections, which has primary substantive responsibility for the petition. The Committee on Nominations & Elections was asked to reconsider the signature requirements, procedures for acceptance of electronic signatures, and recommendations from the Governance Review Task Force on election procedures. The second petition presented to council for action was the "Petition on Rules for Nominating Members of N&E for National Offices." This petition was not approved by council. The third petition, the "Petition on Multiyear Dues," was amended by incidental motion on the council floor, calling for the petition to become effective when technical components are instituted to track payments, but no later than Jan. 1, 2010. Council approved the incidental motion and then approved the petition. The committee reviewed one petition for consideration, the "Petition on Local Section Affiliations," which will be submitted to council for action at the fall 2007 meeting in Boston. The committee met with representatives of the Committee on Membership Affairs and the Governance Review Task Force to continue discussions on proposals currently being formulated on membership requirements and student membership. In addition, the committee discussed election issues of concern to the Southern California Section. We hope you enjoyed the presidential and division thematic program, "Sustainability of Energy, Food & Water" in Chicago. 
A small, dedicated group of volunteers and staff labored tirelessly to create and coordinate this programming; to them the Committee on Divisional Activities (DAC) offers sincere thanks. DAC has committed to transfer the process of choosing and organizing future national meeting themes to a body that represents all divisions. We made substantial progress in Chicago, where division, secretariat, and committee representatives convened to discuss national meeting program concepts. They proposed themes for the 2008 Philadelphia national meeting as well as a framework for a future national programming group. Divisions have successfully served their members fortunate enough to attend national meetings. To maximize benefits to division members, DAC encourages divisions to consider extending the reach of the content they deliver at national meetings through Internet-based distribution channels and will support worthy efforts in this direction via Innovative Program Grants. The committee voted in Chicago to propose modifications to the division funding formula that will more greatly reward interdisciplinary programming. The new formula will also be simpler and more transparent to divisions. DAC will present the revised plan to council for action in Boston. The Committee on Economic & Professional Affairs (CEPA), working with ACS staff in the Departments of Career Management & Development and Member Research & Technology, continues to update and implement its strategic plan to address the career needs of society members. Specifically, the committee reviewed and revised existing workshops and materials to help ACS members get jobs. CEPA is developing new programs to address the needs of mid- and late-career chemists to ensure their continued competitiveness in the workplace and to ease their career transitions. New initiatives in these areas include the development of workshops, online training, surveys to assess member needs, suggested changes to public policies, and updates to professional and ethical workplace guidelines. As a result of discussions at the Public Policy Roundtable, which was held in San Francisco, a background paper is being developed on trends in health care issues. The newly revised "Chemical Professional's Code of Conduct" was presented to council, which approved it. The Standards & Ethics Subcommittee is preparing a revision of the "Academic Professional Guidelines" to be presented to council for consideration in Boston. CEPA reviewed the Globalization Task Force Report. As our science diffuses around the globe, we want to make sure that our members are aware of the economic and professional challenges they will face and that they have the tools they need to succeed. Therefore, CEPA made a commitment to work with other committees, divisions, and ACS staff to develop programs and policies that position our membership to compete in the global workforce. CEPA heard and discussed a presentation on the proposal from the Membership Affairs Committee on broadening the requirements of membership. CEPA supports the spirit of this proposal and encourages further detailed studies to assess financial impacts on local sections and student affiliates chapters. The Local Section Activities Committee (LSAC) recognized local sections celebrating significant anniversaries in 2007, including Savannah River (50 years), Northeast Tennessee (75 years), and the St. Louis and Syracuse local sections (both celebrating 100 years). 
LSAC hosted the local section leaders track in conjunction with the ACS Leaders Conference in Baltimore on Jan. 26–28. A total of 135 delegates from 124 local sections participated in the weekend leadership conference. LSAC also hosted a Local Section Summit on March 2–4 in Arlington, Va. The summit focused on practical operational issues that will support local sections' long-term success. Specific areas that were discussed include the development of a multiyear plan to expand or develop programming for local sections, opportunities to encourage innovation and experimentation within and among local sections, and capitalizing on existing opportunities to facilitate partnerships between local sections and other ACS groups. Following the San Francisco national meeting, LSAC launched a local section Science Café minigrant program. Fifty-five local sections accepted LSAC's invitation to host Science Cafés in 2007. A DVD entitled "ACS Close to Home: Local Sections Connecting Chemistry & the Community" was released earlier this year. The video provides a seven-minute overview of the many outreach and educational programs sponsored by local sections and the critical role they play in positively influencing the public's perception of chemistry and its practitioners. Copies of the DVD were sent to all local section officers. The Committee on Meetings & Expositions (M&E) reported that the 233rd ACS national meeting hosted 14,520 attendees. This included 7,152 chemical scientists, 5,059 students, 1,283 exhibitors, 119 precollege teachers, 573 exposition visitors, and 453 guests. The exposition had 424 booths with 268 companies. The 10 regional meetings held in 2006 set a new standard for excellence with attendance exceeding 8,000, a 30% increase in average meeting attendance compared to the 2005 meetings. A total of 4,717 abstracts were submitted. A region summit was held in February at which the final report of the ReACT study group was reviewed. The practice of tracking the number of presenter no-shows continues. M&E will collaborate with the Committee on Divisional Activities to study options for addressing this problem. Suggestions will be presented at the Boston meeting for implementation in 2008. It is the intent of M&E to pursue the goal of making our meetings "greener." We will communicate with staff and governance units to identify actions for both the short and long term. The American Institute of Chemical Engineers (AIChE) and ACS will hold their 2008 spring meetings simultaneously in New Orleans. An ad hoc working group consisting of members from M&E, DAC, and AIChE is actively exploring joint programming opportunities for this meeting. The Committee on Membership Affairs (MAC) met in executive session on Saturday and Sunday in Chicago and reported that the ACS closed 2006 with 160,491 members, our highest year-end membership count since 2002. Of the 17,857 applications processed in 2006, more than 1,000 came from the Member-Get-a-Member campaign in which many councilors participated. The society's retention rate in 2006 remained strong at 92%. The committee also reported that recruitment for the first two months of 2007 netted 2,844 new applications—729 more than for the same time period last year. MAC continues to work with deliberate speed on the proposed new bylaw language for members, student members, and society affiliates: the three ways to connect to the society. 
The committee received input from the Governance Review Task Force and its action teams, the Council Policy Committee, the board of directors, the Committee on Constitution & Bylaws, and several other committees between the San Francisco and Chicago meetings. These interactions have resulted in the current bylaw change recommendations. In Chicago, representatives from MAC attended several committee meetings and all seven councilor caucuses to summarize the current proposal for membership changes, answer questions, and seek input. In addition, all committee chairs were invited to have their respective committees review these bylaw changes and respond to MAC—if possible—before council met on Wednesday. MAC received 11 responses: eight supported the proposed changes as is, and three supported the proposed language with specified changes or considerations. The comprehensive petition will likely represent the most significant and voluminous change in the ACS bylaws that has occurred in decades, and MAC is proud to be among the leaders in its development and in efforts to get it right the first time. Hundreds of individuals have contributed to this major effort, since MAC began such discussions at the spring 2004 national meeting. The Committee on Ethics met in Chicago and discussed the possibility of organizing and scheduling a committee retreat in the near future to enable the committee to move from the current stage of exploring the needs and interests of ACS members to setting priorities for the next few years. The Project SEED program offers summer research opportunities for high school students from economically disadvantaged families. Since its inception in 1968, the program has had a significant impact on the lives of more than 8,400 students. At the selection meeting in March, the committee approved research projects for 340 SEED I students and 98 SEED II students for this summer in more than 100 institutions. The 2006 annual assessment surveys from 300 students indicate that 78% of the Project SEED participants are planning to major in a chemistry-related science, and 66% aspire to continue to graduate education. This program is made possible by contributions from industry, academia, local sections, ACS friends and members, the ACS Petroleum Research Fund, and the Project SEED Endowment. The committee formally submitted a request to ConC to amend the Project SEED acronym and the committee duties described in the Supplementary Information of the "ACS Charter, Constitution, Bylaws & Regulations." In Chicago, the committee's agenda focused on the ACS Strategic Plan and how Project SEED fits into it, the Program Review Advisory Group (PRAG) review of the Project SEED program, the committee's review of an online application form, and planning of the 40th anniversary celebration to be held at the Philadelphia meeting in the fall of 2008. The committee selected a task force to review the criteria for selection of the Project SEED ChemLuminary Award. 3. Making ACS relevant to technicians. Last year, CTA, along with the Division of Chemical Technicians, the Committee on Economic & Professional Affairs, and ChemTechLinks, started the Equipping the 2015 Chemical Technology Workforce initiative. This year, the initiative awarded six $500 minigrants to activities and programs that support the educational and professional development of chemical technicians. 
We are pleased to announce that the winners of the minigrants are the ACS Division of Environmental Chemistry; the Chemical Technician Program Chair for the 39th ACS Central Regional Meeting in Covington, Ky.; Delta College, University Center, Mich.; Grand Rapids Community College, in Michigan; Mount San Antonio College, Walnut, Calif.; and Southwestern College in Chula Vista, Calif. The winners are collaborating with industry, academia, and ACS local sections on such activities as chemical technology career fairs for high school students, discussion panels on employability skills for technicians, and technical programming at regional and national meetings on the vital role technicians have in the chemical enterprise. Because of the enthusiastic response to the minigrants, Equipping the 2015 Chemical Technology Workforce will be supporting another round of minigrants to be distributed in the fall. Details will be available on the website. For more information, go to www.ChemTechLinks.org and click on "Equipping the 2015 Chemical Technology Workforce." CTA has also joined with the Joint Subcommittee on Diversity, formerly known as the Collaboration of Committees Working Group. Because this group is focused on increasing diversity in ACS and the chemical enterprise, we believe that this is an opportunity to raise awareness of the value of technicians. CTA looks forward to collaborating on the promotion of traditionally underrepresented chemical professionals. In 2007, CTA will be placing renewed focus on distribution of the ACS Chemical Technology Student Recognition Award. The award recognizes academic excellence in students preparing for careers as chemical technicians. For more information on the award, please visit the CTA website at chemistry.org/committees/cta. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How many people attend the 233rd ACS national meeting? Only give me the answer and do not output any other words.
Answer:
[ "There are 14,520 attendees, including 7,152 chemical scientists, 5,059 students, 1,283 exhibitors, 119 precollege teachers, 573 exposition visitors, and 453 guests." ]
276
8,026
single_doc_qa
7b887e1469ed4edbb6294774ce12d700
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The task of speculation detection and scope resolution is critical in distinguishing factual information from speculative information. This has multiple use-cases, like systems that determine the veracity of information, and those that involve requirement analysis. This task is particularly important to the biomedical domain, where patient reports and medical articles often use this feature of natural language. This task is commonly broken down into two subtasks: the first subtask, speculation cue detection, is to identify the uncertainty cue in a sentence, while the second subtask: scope resolution, is to identify the scope of that cue. For instance, consider the example: It might rain tomorrow. The speculation cue in the sentence above is ‘might’ and the scope of the cue ‘might’ is ‘rain tomorrow’. Thus, the speculation cue is the word that expresses the speculation, while the words affected by the speculation are in the scope of that cue. This task was the CoNLL-2010 Shared Task (BIBREF0), which had 3 different subtasks. Task 1B was speculation cue detection on the BioScope Corpus, Task 1W was weasel identification from Wikipedia articles, and Task 2 was speculation scope resolution from the BioScope Corpus. For each task, the participants were provided the train and test set, which is henceforth referred to as Task 1B CoNLL and Task 2 CoNLL throughout this paper. For our experimentation, we use the sub corpora of the BioScope Corpus (BIBREF1), namely the BioScope Abstracts sub corpora, which is referred to as BA, and the BioScope Full Papers sub corpora, which is referred to as BF. We also use the SFU Review Corpus (BIBREF2), which is referred to as SFU. This subtask of natural language processing, along with another similar subtask, negation detection and scope resolution, have been the subject of a body of work over the years. The approaches used to solve them have evolved from simple rule-based systems (BIBREF3) based on linguistic information extracted from the sentences, to modern deep-learning based methods. The Machine Learning techniques used varied from Maximum Entropy Classifiers (BIBREF4) to Support Vector Machines (BIBREF5,BIBREF6,BIBREF7,BIBREF8), while the deep learning approaches included Recursive Neural Networks (BIBREF9,BIBREF10), Convolutional Neural Networks (BIBREF11) and most recently transfer learning-based architectures like Bidirectional Encoder Representation from Transformers (BERT) (BIBREF12). Figures FIGREF1 and FIGREF1 contain a summary of the papers addressing speculation detection and scope resolution (BIBREF13, BIBREF5, BIBREF9, BIBREF3, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF6, BIBREF11, BIBREF18, BIBREF10, BIBREF19, BIBREF7, BIBREF4, BIBREF8). Inspired by the most recent approach of applying BERT to negation detection and scope resolution (BIBREF12), we take this approach one step further by performing a comparative analysis of three popular transformer-based architectures: BERT (BIBREF20), XLNet (BIBREF21) and RoBERTa (BIBREF22), applied to speculation detection and scope resolution. We evaluate the performance of each model across all datasets via the single dataset training approach, and report all scores including inter-dataset scores (i.e. train on one dataset, evaluate on another) to test the generalizability of the models. 
This approach outperforms all existing systems on the task of speculation detection and scope resolution. Further, we jointly train on multiple datasets and obtain improvements over the single dataset training approach on most datasets. Contrary to results observed on benchmark GLUE tasks, we observe XLNet consistently outperforming RoBERTa. To confirm this observation, we apply these models to the negation detection and scope resolution task, and observe a continuity in this trend, reporting state-of-the-art results on three of four datasets on the negation scope resolution task. The rest of the paper is organized as follows: In Section 2, we provide a detailed description of our methodology and elaborate on the experimentation details. In Section 3, we present our results and analysis on the speculation detection and scope resolution task, using the single dataset and the multiple dataset training approach. In Section 4, we show the results of applying XLNet and RoBERTa on negation detection and scope resolution and propose a few reasons to explain why XLNet performs better than RoBERTa. Finally, the future scope and conclusion is mentioned in Section 5. Methodology and Experimental Setup We use the methodology by Khandelwal and Sawant (BIBREF12), and modify it to support experimentation with multiple models. For Speculation Cue Detection: Input Sentence: It might rain tomorrow. True Labels: Not-A-Cue, Cue, Not-A-Cue, Not-A-Cue. First, this sentence is preprocessed to get the target labels as per the following annotation schema: 1 – Normal Cue 2 – Multiword Cue 3 – Not a cue 4 – Pad token Thus, the preprocessed sequence is as follows: Input Sentence: [It, might, rain, tomorrow] True Labels: [3,1,3,3] Then, we preprocess the input using the tokenizer for the model being used (BERT, XLNet or RoBERTa): splitting each word into one or more tokens, and converting each token to it’s corresponding tokenID, and padding it to the maximum input length of the model. Thus, Input Sentence: [wtt(It), wtt(might), wtt(rain), wtt(tom), wtt(## or), wtt(## row), wtt(〈pad 〉),wtt(〈pad 〉)...] True Labels: [3,1,3,3,3,3,4,4,4,4,...] The word ‘tomorrow' has been split into 3 tokens, ‘tom', ‘##or' and ‘##row'. The function to convert the word to tokenID is represented by wtt. For Speculation Scope Resolution: If a sentence has multiple cues, each cue's scope will be resolved individually. Input Sentence: It might rain tomorrow. True Labels: Out-Of-Scope, Out-Of-Scope, In-Scope, In-Scope. First, this sentence is preprocessed to get the target labels as per the following annotation schema: 0 – Out-Of-Scope 1 – In-Scope Thus, the preprocessed sequence is as follows: True Scope Labels: [0,0,1,1] As for cue detection, we preprocess the input using the tokenizer for the model being used. Additionally, we need to indicate which cue's scope we want to find in the input sentence. We do this by inserting a special token representing the token type (according to the cue detection annotation schema) before the cue word whose scope is being resolved. Here, we want to find the scope of the cue ‘might'. Thus, Input Sentence: [wtt(It), wtt(〈token[1]〉), wtt(might), wtt(rain), wtt(tom), wtt(##or), wtt(##row), wtt(〈pad〉), wtt(〈pad〉)...] True Scope Labels: [0,0,1,1,1,1,0,0,0,0,...] 
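For concreteness, the cue-detection preprocessing described above can be sketched as follows (Python; this is a minimal illustration assuming a Hugging Face BERT tokenizer, and the variable names are illustrative rather than taken from the original code):

# Minimal sketch of the cue-detection preprocessing (labels: 1=cue, 2=multiword cue, 3=not a cue, 4=pad).
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
words = ["It", "might", "rain", "tomorrow"]
word_labels = [3, 1, 3, 3]
MAX_LEN = 8

token_ids, token_labels = [], []
for word, label in zip(words, word_labels):
    pieces = tokenizer.tokenize(word)                      # e.g. "tomorrow" -> ["tom", "##or", "##row"]
    token_ids.extend(tokenizer.convert_tokens_to_ids(pieces))
    token_labels.extend([label] * len(pieces))             # each sub-token inherits its word's label

token_ids += [tokenizer.pad_token_id] * (MAX_LEN - len(token_ids))
token_labels += [4] * (MAX_LEN - len(token_labels))
# For scope resolution, a special token encoding the cue type would additionally be inserted
# immediately before the cue word whose scope is being resolved, as described above.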
Now, the preprocessed input for cue detection, and similarly for scope detection, is fed as input to our model as follows: X = Model(Input); Y = W*X + b. The matrix W has size n_hidden x num_classes (n_hidden is the size of the representation of a token within the model). These logits are fed to the loss function. We use the following variants of each model: BERT: bert-base-uncased (s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz; the model used by BIBREF12), RoBERTa: roberta-base (s3.amazonaws.com/models.huggingface.co/bert/roberta-base-pytorch_model.bin; RoBERTa-base does not have an uncased variant), and XLNet: xlnet-base-cased (s3.amazonaws.com/models.huggingface.co/bert/xlnet-base-cased-pytorch_model.bin; XLNet-base does not have an uncased variant). The output of the model is a vector of probabilities per token. The loss is calculated for each token, by using the output vector and the true label for that token. We use class weights for the loss function, setting the weight for label 4 to 0 and all other labels to 1 (for cue detection only) to avoid training on the padding token’s output. We calculate the scores (Precision, Recall, F1) for the model per word of the input sentence, not per token that was fed to the model, as the tokens could be different for different models, leading to inaccurate scores. For the above example, we calculate the output label for the word ‘tomorrow', not for each token it was split into (‘tom', ‘##or' and ‘##row'). To find the label for each word from the tokens it was split into, we experiment with 2 methods: Average: We average the output vectors (softmax probabilities) for each token that the word was split into by the model's tokenizer. In the example above, we average the output of ‘tom', ‘##or' and ‘##row' to get the output for ‘tomorrow'. Then, we take an argmax over the resultant vector. This is then compared with the true label for the original word. First Token: Here, we only consider the first token's probability vector (among all tokens the word was split into) as the output for that word, and get the label by an argmax over this vector. In the example above, we would consider the output vector corresponding to the token ‘tom' as the output for the word ‘tomorrow'. For cue detection, the results are reported for the Average method only, while we report the scores for both Average and First Token for Scope Resolution. For fair comparison, we use the same hyperparameters for the entire architecture for all 3 models. Only the tokenizer and the model are changed for each model. All other hyperparameters are kept the same. We finetune the models for 60 epochs, using early stopping with a patience of 6 on the F1 score (word level) on the validation dataset. We use an initial learning rate of 3e-5, with a batch size of 8. We use the Categorical Cross Entropy loss function. We use Huggingface’s PyTorch Transformers library (BIBREF23) for the models and train all our models on Google Colaboratory. 
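A minimal sketch of the shared classification head and of the "Average" word-level aggregation follows (Python/PyTorch; "encoder" stands for any of the three pretrained models, and all names here are illustrative, not taken from the released code):

import torch
import torch.nn as nn

class TokenClassifier(nn.Module):
    def __init__(self, encoder, n_hidden, num_classes):
        super().__init__()
        self.encoder = encoder                            # BERT, XLNet or RoBERTa backbone
        self.linear = nn.Linear(n_hidden, num_classes)    # the W*X + b layer described above

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask)[0]  # (batch, seq, n_hidden)
        return self.linear(hidden)                        # per-token logits

def word_probabilities_average(token_probs, word_spans):
    # token_probs: (seq, num_classes) softmax outputs; word_spans: [(start, end), ...] per word
    # the word label is the argmax of each averaged vector ("First Token" would use token_probs[s] instead)
    return [token_probs[s:e].mean(dim=0) for s, e in word_spans]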
For training the model on multiple datasets, we perform a 70-15-15 split of each training dataset, after which the train and validation part of the individual datasets are merged while the scores are reported for the test part of the individual datasets, which is not used for training or validation. We report the results as an average of 3 runs of the model. Figure FIGREF8 contains results for speculation cue detection and scope resolution when trained on a single dataset. All models perform the best when trained on the same dataset as they are evaluated on, except for BF, which gets the best results when trained on BA. This is because of the transfer learning capabilities of the models and the fact that BF is a smaller dataset than BA (BF: 2670 sentences, BA: 11871 sentences). For speculation cue detection, there is lesser generalizability for models trained on BF or BA, while there is more generalizability for models trained on SFU. This could be because of the different nature of the biomedical domain. Figure FIGREF11 contains the results for speculation detection and scope resolution for models trained jointly on multiple datasets. We observe that training on multiple datasets helps the performance of all models on each dataset, as the quantity of data available to train the model increases. We also observe that XLNet consistently outperforms BERT and RoBERTa. To confirm this observation, we apply the 2 models to the related task of negation detection and scope resolution Negation Cue Detection and Scope Resolution We use a default train-validation-test split of 70-15-15 for each dataset, and use all 4 datasets (BF, BA, SFU and Sherlock). The results for BERT are taken from BIBREF12. The results for XLNet and RoBERTa are averaged across 5 runs for statistical significance. Figure FIGREF14 contains results for negation cue detection and scope resolution. We report state-of-the-art results on negation scope resolution on BF, BA and SFU datasets. Contrary to popular opinion, we observe that XLNet is better than RoBERTa for the cue detection and scope resolution tasks. A few possible reasons for this trend are: Domain specificity, as both negation and speculation are closely related subtasks. Further experimentation on different tasks is needed to verify this. Most benchmark tasks are sentence classification tasks, whereas the subtasks we experiment on are sequence labelling tasks. Given the pre-training objective of XLNet (training on permutations of the input), it may be able to capture long-term dependencies better, essential for sequence labelling tasks. We work with the base variants of the models, while most results are reported with the large variants of the models. Conclusion and Future Scope In this paper, we expanded on the work of Khandelwal and Sawant (BIBREF12) by looking at alternative transfer-learning models and experimented with training on multiple datasets. On the speculation detection task, we obtained a gain of 0.42 F1 points on BF, 1.98 F1 points on BA and 0.29 F1 points on SFU, while on the scope resolution task, we obtained a gain of 8.06 F1 points on BF, 4.27 F1 points on BA and 11.87 F1 points on SFU, when trained on a single dataset. While training on multiple datasets, we observed a gain of 10.6 F1 points on BF and 1.94 F1 points on BA on the speculation detection task and 2.16 F1 points on BF and 0.25 F1 points on SFU on the scope resolution task over the single dataset training approach. 
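The joint-training protocol described above amounts to something like the following sketch (Python; bf, ba and sfu are assumed to be already-loaded lists of annotated sentences, and the helper names are ours):

import random

def split_70_15_15(data, seed=0):
    data = list(data)
    random.Random(seed).shuffle(data)
    n = len(data)
    return data[:int(0.7 * n)], data[int(0.7 * n):int(0.85 * n)], data[int(0.85 * n):]

train, val, test = [], [], {}
for name, corpus in {"BF": bf, "BA": ba, "SFU": sfu}.items():
    tr, va, te = split_70_15_15(corpus)
    train += tr          # train parts of all corpora are merged
    val += va            # validation parts are merged as well
    test[name] = te      # scores are reported separately on each corpus's held-out test part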
We thus significantly advance the state-of-the-art for speculation detection and scope resolution. On the negation scope resolution task, we applied the XLNet and RoBERTa and obtained a gain of 3.16 F1 points on BF, 0.06 F1 points on BA and 0.3 F1 points on SFU. Thus, we demonstrated the usefulness of transformer-based architectures in the field of negation and speculation detection and scope resolution. We believe that a larger and more general dataset would go a long way in bolstering future research and would help create better systems that are not domain-specific. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What were the baselines? Only give me the answer and do not output any other words.
Answer:
[ "varied from Maximum Entropy Classifiers (BIBREF4) to Support Vector Machines (BIBREF5,BIBREF6,BIBREF7,BIBREF8), Recursive Neural Networks (BIBREF9,BIBREF10), Convolutional Neural Networks (BIBREF11) and most recently transfer learning-based architectures like Bidirectional Encoder Representation from Transformers (BERT) (BIBREF12)" ]
276
3,216
single_doc_qa
79b9bddce53244c39354a86cebb25fe8
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Decoding intended speech or motor activity from brain signals is one of the major research areas in Brain Computer Interface (BCI) systems BIBREF0 , BIBREF1 . In particular, speech-related BCI technologies attempt to provide effective vocal communication strategies for controlling external devices through speech commands interpreted from brain signals BIBREF2 . Not only do they provide neuro-prosthetic help for people with speaking disabilities and neuro-muscular disorders like locked-in syndrome, nasopharyngeal cancer, and amyotrophic lateral sclerosis (ALS), but they also equip people with a better medium to communicate and express thoughts, thereby improving the quality of rehabilitation and clinical neurology BIBREF3 , BIBREF4 . Such devices also have applications in entertainment, preventive treatments, personal communication, games, etc. Furthermore, BCI technologies can be utilized in silent communication, as in noisy environments, or situations where any sort of audio-visual communication is infeasible. Among the various brain activity-monitoring modalities in BCI, electroencephalography (EEG) BIBREF5 , BIBREF6 has demonstrated promising potential to differentiate between various brain activities through measurement of related electric fields. EEG is non-invasive, portable, low cost, and provides satisfactory temporal resolution. This makes EEG suitable to realize BCI systems. EEG data, however, is challenging: these data are high dimensional, have poor SNR, and suffer from low spatial resolution and a multitude of artifacts. For these reasons, it is not particularly obvious how to decode the desired information from raw EEG signals. Although the area of BCI-based speech intent recognition has received increasing attention among the research community in the past few years, most research has focused on classification of individual speech categories in terms of discrete vowels, phonemes and words BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . This includes categorization of imagined EEG signals into binary vowel categories like /a/, /u/ and rest BIBREF7 , BIBREF8 , BIBREF9 ; binary syllable classes like /ba/ and /ku/ BIBREF1 , BIBREF10 , BIBREF11 , BIBREF12 ; a handful of control words like 'up', 'down', 'left', 'right' and 'select' BIBREF15 or others like 'water', 'help', 'thanks', 'food', 'stop' BIBREF13 , Chinese characters BIBREF14 , etc. Such works mostly involve traditional signal processing or manual feature handcrafting along with linear classifiers (e.g., SVMs). In our recent work BIBREF16 , we introduced deep learning models for classification of vowels and words that achieved 23.45% improvement of accuracy over the baseline. Production of articulatory speech is an extremely complicated process, thereby rendering understanding of the discriminative EEG manifold corresponding to imagined speech highly challenging. As a result, most of the existing approaches failed to achieve satisfactory accuracy on decoding speech tokens from the speech imagery EEG data. Perhaps, for these reasons, very little work has been devoted to relating the brain signals to the underlying articulation. The few exceptions include BIBREF17 , BIBREF18 . In BIBREF17 , Zhao et al. 
used manually handcrafted features from EEG data, combined with speech audio and facial features to achieve classification of the phonological categories varying based on the articulatory steps. However, the imagined speech classification accuracy based on EEG data alone, as reported in BIBREF17 , BIBREF18 , are not satisfactory in terms of accuracy and reliability. We now turn to describing our proposed models. Proposed Framework Cognitive learning process underlying articulatory speech production involves incorporation of intermediate feedback loops and utilization of past information stored in the form of memory as well as hierarchical combination of several feature extractors. To this end, we develop our mixed neural network architecture composed of three supervised and a single unsupervised learning step, discussed in the next subsections and shown in Fig. FIGREF1 . We formulate the problem of categorizing EEG data based on speech imagery as a non-linear mapping INLINEFORM0 of a multivariate time-series input sequence INLINEFORM1 to fixed output INLINEFORM2 , i.e, mathematically INLINEFORM3 : INLINEFORM4 , where c and t denote the EEG channels and time instants respectively. Preprocessing step We follow similar pre-processing steps on raw EEG data as reported in BIBREF17 (ocular artifact removal using blind source separation, bandpass filtering and subtracting mean value from each channel) except that we do not perform Laplacian filtering step since such high-pass filtering may decrease information content from the signals in the selected bandwidth. Joint variability of electrodes Multichannel EEG data is high dimensional multivariate time series data whose dimensionality depends on the number of electrodes. It is a major hurdle to optimally encode information from these EEG data into lower dimensional space. In fact, our investigation based on a development set (as we explain later) showed that well-known deep neural networks (e.g., fully connected networks such as convolutional neural networks, recurrent neural networks and autoencoders) fail to individually learn such complex feature representations from single-trial EEG data. Besides, we found that instead of using the raw multi-channel high-dimensional EEG requiring large training times and resource requirements, it is advantageous to first reduce its dimensionality by capturing the information transfer among the electrodes. Instead of the conventional approach of selecting a handful of channels as BIBREF17 , BIBREF18 , we address this by computing the channel cross-covariance, resulting in positive, semi-definite matrices encoding the connectivity of the electrodes. We define channel cross-covariance (CCV) between any two electrodes INLINEFORM0 and INLINEFORM1 as: INLINEFORM2 . Next, we reject the channels which have significantly lower cross-covariance than auto-covariance values (where auto-covariance implies CCV on same electrode). We found this measure to be essential as the higher cognitive processes underlying speech planning and synthesis involve frequent information exchange between different parts of the brain. Hence, such matrices often contain more discriminative features and hidden information than mere raw signals. This is essentially different than our previous work BIBREF16 where we extract per-channel 1-D covariance information and feed it to the networks. We present our sample 2-D EEG cross-covariance matrices (of two individuals) in Fig. FIGREF2 . 
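A compact NumPy sketch of this step is given below; the rejection threshold is our own assumption, since the text only states that channels with significantly lower cross-covariance than auto-covariance are discarded:

import numpy as np

def channel_cross_covariance(eeg):
    # eeg: array of shape (channels, time); returns the (channels x channels) CCV matrix
    centered = eeg - eeg.mean(axis=1, keepdims=True)
    return centered @ centered.T / eeg.shape[1]

def reject_channels(ccv, ratio=0.1):
    # keep channel i only if some off-diagonal |CCV(i, j)| is comparable to its auto-covariance;
    # the value of "ratio" is illustrative, not taken from the paper
    auto = np.diag(ccv)
    return [i for i in range(ccv.shape[0])
            if np.max(np.abs(np.delete(ccv[i], i))) > ratio * auto[i]]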
CNN & LSTM In order to decode spatial connections between the electrodes from the channel covariance matrix, we use a CNN BIBREF19 , in particular a four-layered 2D CNN stacking two convolutional and two fully connected hidden layers. The INLINEFORM0 feature map at a given CNN layer with input INLINEFORM1 , weight matrix INLINEFORM2 and bias INLINEFORM3 is obtained as: INLINEFORM4 . At this first level of hierarchy, the network is trained with the corresponding labels as target outputs, optimizing a cross-entropy cost function. In parallel, we apply a four-layered recurrent neural network on the channel covariance matrices to explore the hidden temporal features of the electrodes. Namely, we exploit an LSTM BIBREF20 consisting of two fully connected hidden layers, stacked with two LSTM layers and trained in a similar manner as the CNN. Deep autoencoder for spatio-temporal information As we found the individually-trained parallel networks (CNN and LSTM) to be useful (see Table TABREF12 ), we suspected the combination of these two networks could provide a more powerful discriminative spatial and temporal representation of the data than each independent network. As such, we concatenate the last fully-connected layer from the CNN with its counterpart in the LSTM to compose a single feature vector based on these two penultimate layers. Ultimately, this forms a joint spatio-temporal encoding of the cross-covariance matrix. In order to further reduce the dimensionality of the spatio-temporal encodings and cancel background noise effects BIBREF21 , we train an unsupervised deep autoencoder (DAE) on the fused heterogeneous features produced by the combined CNN and LSTM information. The DAE forms our second level of hierarchy, with 3 encoding and 3 decoding layers, and mean squared error (MSE) as the cost function. Classification with Extreme Gradient Boost At the third level of hierarchy, the discrete latent vector representation of the deep autoencoder is fed into an Extreme Gradient Boost based classification layer BIBREF22 , BIBREF23 motivated by BIBREF21 . It is a regularized gradient boosted decision tree that performs well on structured problems. Since our EEG-phonological pairwise classification has an internal structure involving individual phonemes and words, it seems to be a reasonable choice of classifier. The classifier receives its input from the latent vectors of the deep autoencoder and is trained in a supervised manner to output the final predicted classes corresponding to the speech imagery. Dataset We evaluate our model on a publicly available dataset, KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words (pat, pot, knew and gnaw). The dataset consists of 14 participants, with each prompt presented 11 times to each individual. Since our intention is to classify the phonological categories from human thoughts, we discard the facial and audio information and only consider the EEG data corresponding to imagined speech. It is noteworthy that given the mixed nature of EEG signals, it is reportedly challenging to attain a pairwise EEG-phoneme mapping BIBREF18 . In order to explore the problem space, we thus specifically target five binary classification problems addressed in BIBREF17 , BIBREF18 , i.e., presence/absence of consonants, phonemic nasal, bilabial, high-front vowels and high-back vowels. 
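The hierarchical pipeline can be sketched as follows (Python/PyTorch; the layer widths and the penultimate accessors are placeholders, since the passage does not fix them):

import torch
import torch.nn as nn

class FusedFeatures(nn.Module):
    def __init__(self, cnn, lstm):               # the two separately trained branches
        super().__init__()
        self.cnn, self.lstm = cnn, lstm
    def forward(self, cov):                      # cov: channel cross-covariance matrix
        # concatenate the penultimate (last fully connected) layers of both branches
        return torch.cat([self.cnn.penultimate(cov), self.lstm.penultimate(cov)], dim=-1)

class DAE(nn.Module):                            # 3 encoding + 3 decoding layers, trained with MSE
    def __init__(self, d_in, d_latent=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(),
                                 nn.Linear(256, 128), nn.ReLU(),
                                 nn.Linear(128, d_latent))
        self.dec = nn.Sequential(nn.Linear(d_latent, 128), nn.ReLU(),
                                 nn.Linear(128, 256), nn.ReLU(),
                                 nn.Linear(256, d_in))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z                    # reconstruction and latent code

# The latent codes z are then classified with gradient boosting, e.g. xgboost.XGBClassifier,
# trained in a supervised manner on the labels of each binary task.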
Training and hyperparameter selection We performed two sets of experiments with the single-trial EEG data. In PHASE-ONE, our goal was to identify the best architectures and hyperparameters for our networks with a reasonable number of runs. For PHASE-ONE, we randomly shuffled and divided the data (1913 signals from 14 individuals) into train (80%), development (10%) and test sets (10%). In PHASE-TWO, in order to perform a fair comparison with the previous methods reported on the same dataset, we perform a leave-one-subject-out cross-validation experiment using the best settings we learn from PHASE-ONE. The architectural parameters and hyperparameters listed in Table TABREF6 were selected through an exhaustive grid-search based on the validation set of PHASE-ONE. We conducted a series of empirical studies starting from single hidden-layered networks for each of the blocks and, based on the validation accuracy, we increased the depth of each given network and selected the optimal parametric set from all possible combinations of parameters. For the gradient boosting classification, we fixed the maximum depth at 10, number of estimators at 5000, learning rate at 0.1, regularization coefficient at 0.3, subsample ratio at 0.8, and column-sample/iteration at 0.4. We did not find any notable change of accuracy when varying the other hyperparameters while training the gradient boost classifier. Performance analysis and discussion To demonstrate the significance of the hierarchical CNN-LSTM-DAE method, we conducted separate experiments with the individual networks in PHASE-ONE and summarized the results in Table TABREF12 . From the average accuracy scores, we observe that the mixed network performs much better than the individual blocks, which is in agreement with the findings in BIBREF21 . A detailed analysis on repeated runs further shows that in most of the cases, LSTM alone does not perform better than chance. CNN, on the other hand, is heavily biased towards the class label for which it sees more training data. Though the situation improves with combined CNN-LSTM, our analysis clearly shows the necessity of a better encoding scheme to utilize the combined features rather than mere concatenation of the penultimate features of both networks. The very fact that our combined network improves the classification accuracy by a mean margin of 14.45% over the CNN-LSTM network indeed reveals that the autoencoder contributes towards filtering out the unrelated and noisy features from the concatenated penultimate feature set. It also proves that the combined supervised and unsupervised neural networks, trained hierarchically, can learn the discriminative manifold better than the individual networks, and that this is crucial for improving the classification accuracy. In addition to accuracy, we also provide the kappa coefficients BIBREF24 of our method in Fig. FIGREF14 . Here, a higher mean kappa value corresponding to a task implies that the network is able to find better discriminative information from the EEG data beyond random decisions. The maximum above-chance accuracy (75.92%) is recorded for presence/absence of the vowel task and the minimum (49.14%) is recorded for the INLINEFORM0 . To further investigate the feature representation achieved by our model, we plot T-distributed Stochastic Neighbor Embedding (tSNE) corresponding to INLINEFORM0 and V/C classification tasks in Fig. FIGREF8 . 
We particularly select these two tasks as our model exhibits respectively minimum and maximum performance for these two. The tSNE visualization reveals that the second set of features are more easily separable than the first one, thereby giving a rationale for our performance. Next, we provide performance comparison of the proposed approach with the baseline methods for PHASE-TWO of our study (cross-validation experiment) in Table TABREF15 . Since the model encounters the unseen data of a new subject for testing, and given the high inter-subject variability of the EEG data, a reduction in the accuracy was expected. However, our network still managed to achieve an improvement of 18.91, 9.95, 67.15, 2.83 and 13.70 % over BIBREF17 . Besides, our best model shows more reliability compared to previous works: The standard deviation of our model's classification accuracy across all the tasks is reduced from 22.59% BIBREF17 and 17.52% BIBREF18 to a mere 5.41%. Conclusion and future direction In an attempt to move a step towards understanding the speech information encoded in brain signals, we developed a novel mixed deep neural network scheme for a number of binary classification tasks from speech imagery EEG data. Unlike previous approaches which mostly deal with subject-dependent classification of EEG into discrete vowel or word labels, this work investigates a subject-invariant mapping of EEG data with different phonological categories, varying widely in terms of underlying articulator motions (eg: involvement or non-involvement of lips and velum, variation of tongue movements etc). Our model takes an advantage of feature extraction capability of CNN, LSTM as well as the deep learning benefit of deep autoencoders. We took BIBREF17 , BIBREF18 as the baseline works investigating the same problem and compared our performance with theirs. Our proposed method highly outperforms the existing methods across all the five binary classification tasks by a large average margin of 22.51%. Acknowledgments This work was funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada and Canadian Institutes for Health Research (CIHR). Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What data was presented to the subjects to elicit event-related responses? Only give me the answer and do not output any other words.
Answer:
[ "7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw)", "KARA ONE BIBREF17 , composed of multimodal data for stimulus-based, imagined and articulated speech state corresponding to 7 phonemic/syllabic ( /iy/, /piy/, /tiy/, /diy/, /uw/, /m/, /n/ ) as well as 4 words(pat, pot, knew and gnaw)" ]
276
3,184
single_doc_qa
7ab850f2f8fd4acaa28bb7c1378237e7
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: \section{Introduction} \label{sec:introduction} Probabilistic models have proven to be very useful in a lot of applications in signal processing where signal estimation is needed \cite{rabiner1989tutorial,arulampalam2002tutorial,ji2008bayesian}. Some of their advantages are that 1) they force the designer to specify all the assumptions of the model, 2) they provide a clear separation between the model and the algorithm used to solve it, and 3) they usually provide some measure of uncertainty about the estimation. On the other hand, adaptive filtering is a standard approach in estimation problems when the input is received as a stream of data that is potentially non-stationary. This approach is widely understood and applied to several problems such as echo cancellation \cite{gilloire1992adaptive}, noise cancellation \cite{nelson1991active}, and channel equalization \cite{falconer2002frequency}. Although these two approaches share some underlying relations, there are very few connections in the literature. The first important attempt in the signal processing community to relate these two fields was the connection between a linear Gaussian state-space model (i.e. Kalman filter) and the RLS filter, by Sayed and Kailath \cite{sayed1994state} and then by Haykin \emph{et al.} \cite{haykin1997adaptive}. The RLS adaptive filtering algorithm emerges naturally when one defines a particular state-space model (SSM) and then performs exact inference in that model. This approach was later exploited in \cite{van2012kernel} to design a kernel RLS algorithm based on Gaussian processes. A first attempt to approximate the LMS filter from a probabilistic perspective was presented in \cite{park2014probabilistic}, focusing on a kernel-based implementation. The algorithm of \cite{park2014probabilistic} makes use of a Maximum a Posteriori (MAP) estimate as an approximation for the predictive step. However, this approximation does not preserve the estimate of the uncertainty in each step, therefore degrading the performance of the algorithm. In this work, we provide a similar connection between state-space models and least-mean-squares (LMS). Our approach is based on approximating the posterior distribution with an isotropic Gaussian distribution. We show how the computation of this approximated posterior leads to a linear-complexity algorithm, comparable to the standard LMS. Similar approaches have already been developed for a variety of problems such as channel equalization using recurrent RBF neural networks \cite{cid1994recurrent}, or Bayesian forecasting \cite{harrison1999bayesian}. Here, we show the usefulness of this probabilistic approach for adaptive filtering. The probabilistic perspective we adopt throughout this work presents two main advantages. Firstly, a novel LMS algorithm with adaptable step size emerges naturally with this approach, making it suitable for both stationary and non-stationary environments. The proposed algorithm has less free parameters than previous LMS algorithms with variable step size \cite{kwong1992variable,aboulnasr1997robust,shin2004variable}, and its parameters are easier to be tuned w.r.t. these algorithms and standard LMS. Secondly, the use of a probabilistic model provides us with an estimate of the error variance, which is useful in many applications. 
Experiments with simulated and real data show the advantages of the presented approach with respect to previous works. However, we remark that the main contribution of this paper is that it opens the door to introduce more Bayesian machine learning techniques, such as variational inference and Monte Carlo sampling methods \cite{barber2012bayesian}, to adaptive filtering.\\ \section{Probabilistic Model} Throughout this work, we assume the observation model to be linear-Gaussian with the following distribution, \begin{equation} p(y_k|{\bf w}_k) = \mathcal{N}(y_k;{\bf x}_k^T {\bf w}_k , \sigma_n^2), \label{eq:mess_eq} \end{equation} where $\sigma_n^2$ is the variance of the observation noise, ${\bf x}_k$ is the regression vector and ${\bf w}_k$ is the parameter vector to be sequentially estimated, both $M$-dimensional column vectors. In a non-stationary scenario, ${\bf w}_k$ follows a dynamic process. In particular, we consider a diffusion process (random-walk model) with variance $\sigma_d^2$ for this parameter vector: \begin{equation} p({\bf w}_k|{\bf w}_{k-1})= \mathcal{N}({\bf w}_k;{\bf w}_{k-1}, \sigma_d^2 {\bf I}), \label{eq:trans_eq} \end{equation} where $\bf I$ denotes the identity matrix. In order to initiate the recursion, we assume the following prior distribution on ${\bf w}_k$ \begin{equation} p({\bf w}_0)= \mathcal{N}({\bf w}_0;0, \sigma_d^2{\bf I}).\nonumber \end{equation} \section{Exact inference in this model: Revisiting the RLS filter} Given the described probabilistic SSM, we would like to infer the posterior probability distribution $p({\bf w}_k|y_{1:k})$. Since all involved distributions are Gaussian, one can perform exact inference, leveraging the probability rules in a straightforward manner. The resulting probability distribution is \begin{equation} p({\bf w}_k|y_{1:k}) = \mathcal{N}({\bf w}_k;{\bf\boldsymbol\mu}_{k}, \boldsymbol\Sigma_{k}), \nonumber \end{equation} in which the mean vector ${\bf\boldsymbol\mu}_{k}$ is given by \begin{equation} {\bf\boldsymbol\mu}_k = {\bf\boldsymbol\mu}_{k-1} + {\bf K}_k (y_k - {\bf x}_k^T {\bf\boldsymbol\mu}_{k-1}){\bf x}_k, \nonumber \end{equation} where we have introduced the auxiliary variable \begin{equation} {\bf K}_k = \frac{ \left(\boldsymbol\Sigma_{k-1} + \sigma_d^2 {\bf I}\right)}{{\bf x}_k^T \left(\boldsymbol\Sigma_{k-1} + \sigma_d^2 {\bf I}\right) {\bf x}_k + \sigma_n^2}, \nonumber \end{equation} and the covariance matrix $\boldsymbol\Sigma_k$ is obtained as \begin{equation} \boldsymbol\Sigma_k = \left( {\bf I} - {\bf K}_k{\bf x}_k {\bf x}_k^T \right) ( \boldsymbol\Sigma_{k-1} +\sigma_d^2), \nonumber \end{equation} Note that the mode of $p({\bf w}_k|y_{1:k})$, i.e. the maximum-a-posteriori estimate (MAP), coincides with the RLS adaptive rule \begin{equation} {{\bf w}}_k^{(RLS)} = {{\bf w}}_{k-1}^{(RLS)} + {\bf K}_k (y_k - {\bf x}_k^T {{\bf w}}_{k-1}^{(RLS)}){\bf x}_k . \label{eq:prob_rls} \end{equation} This rule is similar to the one introduced in \cite{haykin1997adaptive}. Finally, note that the covariance matrix $\boldsymbol\Sigma_k$ is a measure of the uncertainty of the estimate ${\bf w}_k$ conditioned on the observed data $y_{1:k}$. Nevertheless, for many applications a single scalar summarizing the variance of the estimate could prove to be sufficiently useful. In the next section, we show how such a scalar is obtained naturally when $p({\bf w}_k|y_{1:k})$ is approximated with an isotropic Gaussian distribution. We also show that this approximation leads to an LMS-like estimation. 
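As a concreteness check, the exact-inference recursion above can be sketched in a few lines of Python/NumPy. The dimensions, noise levels and synthetic data below are illustrative assumptions rather than values taken from the paper; the update equations follow the expressions for ${\bf K}_k$, ${\boldsymbol\mu}_k$ and $\boldsymbol\Sigma_k$ given above.
\begin{verbatim}
import numpy as np

def prob_rls_step(mu, Sigma, x, y, sigma_d2, sigma_n2):
    """One exact-inference (Kalman/RLS-type) update for the linear-Gaussian SSM."""
    M = mu.size
    P = Sigma + sigma_d2 * np.eye(M)        # predicted covariance Sigma_{k-1} + sigma_d^2 I
    K = P / (x @ P @ x + sigma_n2)          # gain matrix K_k (a matrix over a scalar)
    mu = mu + (K @ x) * (y - x @ mu)        # posterior mean, i.e. the RLS rule
    Sigma = (np.eye(M) - np.outer(K @ x, x)) @ P
    return mu, Sigma

# Illustrative run on synthetic data (values chosen arbitrarily for the sketch).
rng = np.random.default_rng(1)
M, T, sigma_d2, sigma_n2 = 5, 500, 1e-4, 1e-2
w_true = rng.standard_normal(M)             # true weights (illustrative, not drawn from the prior)
mu, Sigma = np.zeros(M), sigma_d2 * np.eye(M)   # prior p(w_0)
for k in range(T):
    w_true += np.sqrt(sigma_d2) * rng.standard_normal(M)   # random-walk drift
    x = rng.standard_normal(M)
    y = x @ w_true + np.sqrt(sigma_n2) * rng.standard_normal()
    mu, Sigma = prob_rls_step(mu, Sigma, x, y, sigma_d2, sigma_n2)
print("final squared error:", np.sum((mu - w_true) ** 2))
\end{verbatim}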
\section{Approximating the posterior distribution: LMS filter } The proposed approach consists in approximating the posterior distribution $p({\bf w}_k|y_{1:k})$, in general a multivariate Gaussian distribution with a full covariance matrix, by an isotropic spherical Gaussian distribution \begin{equation} \label{eq:aprox_post} \hat{p}({\bf w}_{k}|y_{1:k})=\mathcal{N}({\bf w}_{k};{\bf \hat{\boldsymbol\mu}}_{k}, \hat{\sigma}_{k}^2 {\bf I} ). \end{equation} In order to estimate the mean and covariance of the approximate distribution $\hat{p}({\bf w}_{k}|y_{1:k})$, we propose to select those that minimize the Kullback-Leibler divergence with respect to the original distribution, i.e., \begin{equation} \{\hat{\boldsymbol\mu}_k,\hat{\sigma}_k\}=\arg \displaystyle{ \min_{\hat{\boldsymbol\mu}_k,\hat{\sigma}_k}} \{ D_{KL}\left(p({\bf w}_{k}|y_{1:k}))\| \hat{p}({\bf w}_{k}|y_{1:k})\right) \}. \nonumber \end{equation} The derivation of the corresponding minimization problem can be found in Appendix A. In particular, the optimal mean and the covariance are found as \begin{equation} {\hat{\boldsymbol\mu}}_{k} = {\boldsymbol\mu}_{k};~~~~~~ \hat{\sigma}_{k}^2 = \frac{{\sf Tr}\{ \boldsymbol\Sigma_k\} }{M}. \label{eq:sigma_hat} \end{equation} We now show that by using \eqref{eq:aprox_post} in the recursive predictive and filtering expressions we obtain an LMS-like adaptive rule. First, let us assume that we have an approximate posterior distribution at $k-1$, $\hat{p}({\bf w}_{k-1}|y_{1:k-1}) = \mathcal{N}({\bf w}_{k-1};\hat{\bf\boldsymbol\mu}_{k-1}, \hat{\sigma}_{k-1}^2 {\bf I} )$. Since all involved distributions are Gaussian, the predictive distribution is obtained as % \begin{eqnarray} \hat{p}({\bf w}_k|y_{1:k-1}) &=& \int p({\bf w}_k|{\bf w}_{k-1}) \hat{p}({\bf w}_{k-1}|y_{1:k-1}) d{\bf w}_{k-1} \nonumber\\ &=& \mathcal{N}({\bf w}_k;{\bf\boldsymbol\mu}_{k|k-1}, \boldsymbol\Sigma_{k|k-1}), \label{eq:approx_pred} \end{eqnarray} where the mean vector and covariance matrix are given by \begin{eqnarray} \hat{\bf\boldsymbol\mu}_{k|k-1} &=& \hat{\bf\boldsymbol\mu}_{k-1} \nonumber \\ \hat{\boldsymbol\Sigma}_{k|k-1} &=& (\hat{\sigma}_{k-1}^2 + \sigma_d^2 ){\bf I}\nonumber. \end{eqnarray} From \eqref{eq:approx_pred}, the posterior distribution at time $k$ can be computed using Bayes' Theorem and standard Gaussian manipulations (see for instance \cite[Ch. 4]{murphy2012machine}). Then, we approximate the posterior $p({\bf w}_k|y_{1:k})$ with an isotropic Gaussian, \begin{equation} \hat{p}({\bf w}_k|y_{1:k}) = \mathcal{N}({\bf w}_k ; {\hat{\boldsymbol\mu}}_{k}, \hat{\sigma}_k^2 {\bf I} ),\nonumber \end{equation} where \begin{eqnarray} {\hat{\boldsymbol\mu}}_{k} &= & {\hat{\boldsymbol\mu}}_{k-1}+ \frac{ (\hat{\sigma}_{k-1}^2+ \sigma_d^2) }{(\hat{\sigma}_{k-1}^2+ \sigma_d^2) \|{\bf x}_k\|^2 + \sigma_n^2} (y_k - {\bf x}_k^T {\hat{\boldsymbol\mu}}_{k-1}){\bf x}_k \nonumber \\ &=& {\hat{\boldsymbol\mu}}_{k-1}+ \eta_k (y_k - {\bf x}_k^T {\hat{\boldsymbol\mu}}_{k-1}){\bf x}_k . \label{eq:prob_lms} \end{eqnarray} Note that, instead of a gain matrix ${\bf K}_k$ as in Eq.~\eqref{eq:prob_rls}, we now have a scalar gain $\eta_k$ that operates as a variable step size. 
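As a quick numerical sanity check of \eqref{eq:sigma_hat} before completing the recursion, the sketch below (plain NumPy, with an arbitrary positive-definite covariance as an assumption) scans the isotropic variance and confirms that the Kullback-Leibler divergence is minimized at ${\sf Tr}\{\boldsymbol\Sigma_1\}/M$, in agreement with Appendix A.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M = 4
A = rng.standard_normal((M, M))
Sigma1 = A @ A.T + 0.1 * np.eye(M)      # a generic SPD covariance (illustrative)
mu1 = rng.standard_normal(M)

def kl_to_isotropic(s2, mu2):
    """D_KL( N(mu1, Sigma1) || N(mu2, s2*I) ), standard Gaussian KL formula."""
    diff = mu2 - mu1
    return 0.5 * (np.trace(Sigma1) / s2 + diff @ diff / s2 - M
                  + M * np.log(s2) - np.log(np.linalg.det(Sigma1)))

s2_grid = np.linspace(0.05, 20.0, 20000)
kl_vals = [kl_to_isotropic(s2, mu1) for s2 in s2_grid]
print("grid argmin  :", s2_grid[int(np.argmin(kl_vals))])
print("Tr(Sigma1)/M :", np.trace(Sigma1) / M)   # matches Eq. (sigma_hat)
\end{verbatim}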
Finally, to obtain the posterior variance, which is our measure of uncertainty, we apply \eqref{eq:sigma_hat} and the trick ${\sf Tr}\{{\bf x}_k{\bf x}_k^T\}= {\bf x}_k^T{\bf x}_k= \|{\bf x}_k \|^2$, \begin{eqnarray} \hat{\sigma}_k^2 &=& \frac{{\sf Tr}(\boldsymbol\Sigma_k)}{M} \\ &=& \frac{1}{M}{\sf Tr}\left\{ \left( {\bf I} - \eta_k {\bf x}_k {\bf x}_k^T \right) (\hat{\sigma}_{k-1}^2 +\sigma_d^2)\right\} \\ &=& \left(1 - \frac{\eta_k \|{\bf x}_k\|^2}{M}\right)(\hat{\sigma}_{k-1}^2 +\sigma_d^2). \label{eq:sig_k} \end{eqnarray} If MAP estimation is performed, we obtain an adaptable step-size LMS estimation \begin{equation} {\bf w}_{k}^{(LMS)} = {\bf w}_{k-1}^{(LMS)} + \eta_k (y_k - {\bf x}_k^T {\bf w}_{k-1}^{(LMS)}){\bf x}_k, \label{eq:lms} \end{equation} with \begin{equation} \eta_k = \frac{ (\hat{\sigma}_{k-1}^2+ \sigma_d^2) }{(\hat{\sigma}_{k-1}^2+ \sigma_d^2) \|{\bf x}_k\|^2 + \sigma_n^2}.\nonumber \end{equation} At this point, several interesting remarks can be made: \begin{itemize} \item The adaptive rule \eqref{eq:lms} has linear complexity since it does not require us to compute the full matrix $\boldsymbol\Sigma_k$. \item For a stationary model, we have $\sigma_d^2=0$ in \eqref{eq:prob_lms} and \eqref{eq:sig_k}. In this case, the algorithm remains valid and both the step size and the error variance, $\hat{\sigma}_{k}$, vanish over time $k$. \item Finally, the proposed adaptable step-size LMS has only two parameters, $\sigma_d^2$ and $\sigma_n^2$, (and only one, $\sigma_n^2$, in stationary scenarios) in contrast to other variable step-size algorithms \cite{kwong1992variable,aboulnasr1997robust,shin2004variable}. More interestingly, both $\sigma_d^2$ and $\sigma_n^2$ have a clear underlying physical meaning, and they can be estimated in many cases. We will comment more about this in the next section. \end{itemize} \section{Experiments} \label{sec:experiments} We evaluate the performance of the proposed algorithm in both stationary and tracking experiments. In the first experiment, we estimate a fixed vector ${\bf w}^{o}$ of dimension $M=50$. The entries of the vector are independently and uniformly chosen in the range $[-1,1]$. Then, the vector is normalized so that $\|{\bf w}^o\|=1$. Regressors $\boldsymbol{x}_{k}$ are zero-mean Gaussian vectors with identity covariance matrix. The additive noise variance is such that the SNR is $20$ dB. We compare our algorithm with standard RLS and three other LMS-based algorithms: LMS, NLMS \cite{sayed2008adaptive}, VSS-LMS \cite{shin2004variable}.\footnote{The used parameters for each algorithm are: for RLS $\lambda=1$, $\epsilon^{-1}=0.01$; for LMS $\mu=0.01$; for NLMS $\mu=0.5$; and for VSS-LMS $\mu_{max}=1$, $\alpha=0.95$, $C=1e-4$.} The probabilistic LMS algorithm in \cite{park2014probabilistic} is not simulated because it is not suitable for stationary environments. In stationary environments, the proposed algorithm has only one parameter, $\sigma^2_n$. We simulate both the scenario where we have perfectly knowledge of the amount of noise (probLMS1) and the case where the value $\sigma^2_n$ is $100$ times smaller than the actual value (probLMS2). The Mean-Square Deviation (${\sf MSD} = {\mathbb E} \| {\bf w}_0 - {\bf w}_k \|^2$), averaged out over $50$ independent simulations, is presented in Fig. \ref{fig:msd_statationary}. 
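For illustration, the adaptable step-size rule \eqref{eq:lms} together with the variance recursion \eqref{eq:sig_k} can be sketched as follows. The stationary setup loosely mimics the first experiment; the initial value of $\hat{\sigma}_0^2$ is an assumption, and with unit-power regressors and $\|{\bf w}^o\|=1$ a 20 dB SNR is taken here to mean $\sigma_n^2 = 0.01$.
\begin{verbatim}
import numpy as np

def prob_lms_step(w, s2, x, y, sigma_d2, sigma_n2, M):
    """Probabilistic LMS: adaptive step size eta_k and scalar uncertainty s2."""
    p = s2 + sigma_d2
    eta = p / (p * (x @ x) + sigma_n2)            # variable step size eta_k
    w = w + eta * (y - x @ w) * x                 # LMS-style mean update
    s2 = (1.0 - eta * (x @ x) / M) * p            # scalar posterior variance
    return w, s2, eta

rng = np.random.default_rng(0)
M, T = 50, 2000
w_o = rng.uniform(-1, 1, M); w_o /= np.linalg.norm(w_o)
sigma_d2, sigma_n2 = 0.0, 1e-2                    # stationary case, 20 dB SNR
w, s2 = np.zeros(M), 1.0                          # s2 init is an assumed prior scale
msd = np.empty(T)
for k in range(T):
    x = rng.standard_normal(M)
    y = x @ w_o + np.sqrt(sigma_n2) * rng.standard_normal()
    w, s2, _ = prob_lms_step(w, s2, x, y, sigma_d2, sigma_n2, M)
    msd[k] = np.sum((w_o - w) ** 2)
print("MSD (dB) after %d steps: %.1f" % (T, 10 * np.log10(msd[-1])))
\end{verbatim}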
\begin{figure}[htb] \centering \begin{minipage}[b]{\linewidth} \centering \centerline{\includegraphics[width=\textwidth]{results_stationary_MSD}} \end{minipage} \caption{Performance in terms of MSD of probabilistic LMS with both optimal (probLMS1) and suboptimal (probLMS2) compared to LMS, NLMS, VS-LMS, and RLS.} \label{fig:msd_statationary} \end{figure} The performance of probabilistic LMS is close to RLS (obviously at a much lower computational cost) and largely outperforms previous variable step-size LMS algorithms proposed in the literature. Note that, when the model is stationary, i.e. $\sigma^2_d=0$ in \eqref{eq:trans_eq}, both the uncertainty $\hat{\sigma}^2_k$, and the adaptive step size $\eta_k$, vanish over time. This implies that the error tends to zero when $k$ goes to infinity. Fig. \ref{fig:msd_statationary} also shows that the proposed approach is not very sensitive to a bad choice of its only parameter, as demonstrated by the good results of probLMS2, which uses a $\sigma^2_n$ that is $100$ times smaller than the optimal value. \begin{figure}[htb] \centering \begin{minipage}[b]{\linewidth} \centering \centerline{\includegraphics[width=\textwidth]{fig2_final}} \end{minipage} \caption{Real part of one coefficient of the measured and estimated channel in experiment two. The shaded area represents two standard deviations from the prediction {(the mean of the posterior distribution)}.} \label{fig_2} \end{figure} \begin{table}[ht] \begin{footnotesize} \setlength{\tabcolsep}{2pt} \def1.5mm{1.5mm} \begin{center} \begin{tabular}{|l@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|c@{\hspace{1.5mm}}|} \hline Method & LMS & NLMS & LMS-2013 & VSSNLMS & probLMS & RLS \\ \hline \hline MSD (dB) &-28.45 &-21.07 &-14.36 &-26.90 &-28.36 &-25.97\\ \hline \end{tabular} \end{center} \caption{Steady-state MSD of the different algorithms for the tracking of a real MISO channel.} \label{tab:table_MSD} \end{footnotesize} \end{table} \newpage In a second experiment, we test the tracking capabilities of the proposed algorithm with {real} data of a wireless MISO channel acquired in a realistic indoor scenario. More details on the setup can be found in \cite{gutierrez2011frequency}. Fig. \ref{fig_2} shows the real part of one of the channels, and the estimate of the proposed algorithm. The shaded area represents the estimated uncertainty for each prediction, i.e. $\hat{\mu}_k\pm2\hat{\sigma}_k$. Since the experimental setup does not allow us to obtain the optimal values for the parameters, we fix these parameters to their values that optimize the steady-state mean square deviation (MSD). \hbox{Table \ref{tab:table_MSD}} shows this steady-state MSD of the estimate of the MISO channel with different methods. As can be seen, the best tracking performance is obtained by standard LMS and the proposed method. \section{Conclusions and Opened Extensions} \label{sec:conclusions} {We have presented a probabilistic interpretation of the least-mean-square filter. The resulting algorithm is an adaptable step-size LMS that performs well both in stationary and tracking scenarios. Moreover, it has fewer free parameters than previous approaches and these parameters have a clear physical meaning. 
Finally, as stated in the introduction, one of the advantages of having a probabilistic model is that it is easily extensible:} \begin{itemize} \item If, instead of using an isotropic Gaussian distribution in the approximation, we used a Gaussian with diagonal covariance matrix, we would obtain a similar algorithm with different step sizes and measures of uncertainty, for each component of ${\bf w}_k$. Although this model can be more descriptive, it needs more parameters to be tuned, and the parallelism with LMS vanishes. \item Similarly, if we substitute the transition model of \eqref{eq:trans_eq} by an Ornstein-Uhlenbeck process, \begin{equation} p({\bf w}_k|{\bf w}_{k-1})= \mathcal{N}({\bf w}_k;\lambda {\bf w}_{k-1}, \sigma_d^2), \nonumber \label{eq:trans_eq_lambda} \end{equation} a similar algorithm is obtained but with a forgetting factor $\lambda$ multiplying ${\bf w}_{k-1}^{(LMS)}$ in \eqref{eq:lms}. This algorithm may have improved performance under such a kind of autoregresive dynamics of ${\bf w}_{k}$, though, again, the connection with standard LMS becomes dimmer. \item As in \cite{park2014probabilistic}, the measurement model \eqref{eq:mess_eq} can be changed to obtain similar adaptive algorithms for classification, ordinal regression, and Dirichlet regression for compositional data. \item A similar approximation technique could be applied to more complex dynamical models, i.e. switching dynamical models \cite{barber2010graphical}. The derivation of efficient adaptive algorithms that explicitly take into account a switch in the dynamics of the parameters of interest is a non-trivial and open problem, though the proposed approach could be useful. \item Finally, like standard LMS, this algorithm can be kernelized for its application in estimation under non-linear scenarios. \end{itemize} \begin{appendices} \section{KL divergence between a general gaussian distribution and an isotropic gaussian} \label{sec:kl} We want to approximate $p_{{\bf x}_1}(x) = \mathcal{N}({\bf x}; \boldsymbol\mu_1,\boldsymbol\Sigma_1)$ by $p_{{\bf x}_2}({\bf x}) = \mathcal{N}({\bf x}; \boldsymbol\mu_2,\sigma_2^2 {\bf I})$. In order to do so, we have to compute the parameters of $p_{{\bf x}_2}({\bf x})$, $\boldsymbol\mu_2$ and $\sigma_2^2$, that minimize the following Kullback-Leibler divergence, \begin{eqnarray} D_{KL}(p_{{\bf x}_1}\| p_{{\bf x}_2}) &=&\int_{-\infty}^{\infty} p_{{\bf x}_1}({\bf x}) \ln{\frac{p_{{\bf x}_1}({\bf x})}{p_{{\bf x}_2}({\bf x})}}d{\bf x} \nonumber \\ &= & \frac{1}{2} \{ -M + {\sf Tr}(\sigma_2^{-2} {\bf I}\cdot \boldsymbol\Sigma_1^{-1}) \nonumber \\ & & + (\boldsymbol\mu_2 - \boldsymbol\mu_1 )^T \sigma^{-2}_2{\bf I} (\boldsymbol\mu_2 - \boldsymbol\mu_1 ) \nonumber \\ & & + \ln \frac{{\sigma_2^2}^M}{\det\boldsymbol\Sigma_1} \}. \label{eq:divergence} \end{eqnarray} Using symmetry arguments, we obtain \begin{equation} \boldsymbol\mu_2^{*} =\arg \displaystyle{ \min_{\boldsymbol\mu_2}} \{ D_{KL}(p_{{\bf x}_1}\| p_{{\bf x}_2}) \} = \boldsymbol\mu_1. \end{equation} Then, \eqref{eq:divergence} gets simplified into \begin{eqnarray} D_{KL}(p_{{\bf x}_1}\| p_{{\bf x}_2}) = \frac{1}{2}\lbrace { -M + {\sf Tr}(\frac{\boldsymbol\Sigma_1}{\sigma_2^{2}}) + \ln \frac{\sigma_2^{2M}}{\det\boldsymbol\Sigma_1}}\rbrace. 
\end{eqnarray} The variance $\sigma_2^2$ is computed in order to minimize this Kullback-Leibler divergence as \begin{eqnarray} \sigma_2^{2*} &=& \arg\min_{\sigma_2^2} D_{KL}(P_{x_1}\| P_{x_2}) \nonumber \\ &=& \arg\min_{\sigma_2^2}\{ \sigma_2^{-2}{\sf Tr}\{\boldsymbol\Sigma_1\} + M\ln \sigma_2^{2} \} . \end{eqnarray} Deriving and making it equal zero leads to \begin{equation} \frac{\partial}{\partial \sigma_2^2} \left[ \frac{{\sf Tr}\{\boldsymbol\Sigma_1\}}{\sigma_2^{2}} + M \ln \sigma_2^{2} \right] = \left. {\frac{M}{\sigma_2^{2}}-\frac{{\sf Tr}\{\boldsymbol\Sigma_1\}}{(\sigma_2^{2})^2}}\right|_{\sigma_2^{2}=\sigma_2^{2*}}\left. =0 \right. . \nonumber \end{equation} Finally, since the divergence has a single extremum in $R_+$, \begin{equation} \sigma_2^{2*} = \frac{{\sf Tr}\{\boldsymbol\Sigma_1\}}{M}. \end{equation} \end{appendices} \vfill \clearpage \bibliographystyle{IEEEbib} Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What is the proposed approach in this research paper? Only give me the answer and do not output any other words.
Answer:
[ "This research paper proposed an approach based on approximating the posterior distribution with an isotropic Gaussian distribution." ]
276
6,395
single_doc_qa
8353f020218d4239bd68d566bf99d87a
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction With the increasing popularity of the Internet, online texts provided by social media platform (e.g. Twitter) and news media sites (e.g. Google news) have become important sources of real-world events. Therefore, it is crucial to automatically extract events from online texts. Due to the high variety of events discussed online and the difficulty in obtaining annotated data for training, traditional template-based or supervised learning approaches for event extraction are no longer applicable in dealing with online texts. Nevertheless, newsworthy events are often discussed by many tweets or online news articles. Therefore, the same event could be mentioned by a high volume of redundant tweets or news articles. This property inspires the research community to devise clustering-based models BIBREF0 , BIBREF1 , BIBREF2 to discover new or previously unidentified events without extracting structured representations. To extract structured representations of events such as who did what, when, where and why, Bayesian approaches have made some progress. Assuming that each document is assigned to a single event, which is modeled as a joint distribution over the named entities, the date and the location of the event, and the event-related keywords, Zhou et al. zhou2014simple proposed an unsupervised Latent Event Model (LEM) for open-domain event extraction. To address the limitation that LEM requires the number of events to be pre-set, Zhou et al. zhou2017event further proposed the Dirichlet Process Event Mixture Model (DPEMM) in which the number of events can be learned automatically from data. However, both LEM and DPEMM have two limitations: (1) they assume that all words in a document are generated from a single event which can be represented by a quadruple INLINEFORM0 entity, location, keyword, date INLINEFORM1 . However, long texts such news articles often describe multiple events which clearly violates this assumption; (2) During the inference process of both approaches, the Gibbs sampler needs to compute the conditional posterior distribution and assigns an event for each document. This is time consuming and takes long time to converge. To deal with these limitations, in this paper, we propose the Adversarial-neural Event Model (AEM) based on adversarial training for open-domain event extraction. The principle idea is to use a generator network to learn the projection function between the document-event distribution and four event related word distributions (entity distribution, location distribution, keyword distribution and date distribution). Instead of providing an analytic approximation, AEM uses a discriminator network to discriminate between the reconstructed documents from latent events and the original input documents. This essentially helps the generator to construct a more realistic document from a random noise drawn from a Dirichlet distribution. Due to the flexibility of neural networks, the generator is capable of learning complicated nonlinear distributions. And the supervision signal provided by the discriminator will help generator to capture the event-related patterns. Furthermore, the discriminator also provides low-dimensional discriminative features which can be used to visualize documents and events. 
The main contributions of the paper are summarized below: Related Work Our work is related to two lines of research, event extraction and Generative Adversarial Nets. Event Extraction Recently there has been much interest in event extraction from online texts, and approaches could be categorized as domain-specific and open-domain event extraction. Domain-specific event extraction often focuses on the specific types of events (e.g. sports events or city events). Panem et al. panem2014structured devised a novel algorithm to extract attribute-value pairs and mapped them to manually generated schemes for extracting the natural disaster events. Similarly, to extract the city-traffic related event, Anantharam et al. anantharam2015extracting viewed the task as a sequential tagging problem and proposed an approach based on the conditional random fields. Zhang zhang2018event proposed an event extraction approach based on imitation learning, especially on inverse reinforcement learning. Open-domain event extraction aims to extract events without limiting the specific types of events. To analyze individual messages and induce a canonical value for each event, Benson et al. benson2011event proposed an approach based on a structured graphical model. By representing an event with a binary tuple which is constituted by a named entity and a date, Ritter et al. ritter2012open employed some statistic to measure the strength of associations between a named entity and a date. The proposed system relies on a supervised labeler trained on annotated data. In BIBREF1 , Abdelhaq et al. developed a real-time event extraction system called EvenTweet, and each event is represented as a triple constituted by time, location and keywords. To extract more information, Wang el al. wang2015seeft developed a system employing the links in tweets and combing tweets with linked articles to identify events. Xia el al. xia2015new combined texts with the location information to detect the events with low spatial and temporal deviations. Zhou et al. zhou2014simple,zhou2017event represented event as a quadruple and proposed two Bayesian models to extract events from tweets. Generative Adversarial Nets As a neural-based generative model, Generative Adversarial Nets BIBREF3 have been extensively researched in natural language processing (NLP) community. For text generation, the sequence generative adversarial network (SeqGAN) proposed in BIBREF4 incorporated a policy gradient strategy to optimize the generation process. Based on the policy gradient, Lin et al. lin2017adversarial proposed RankGAN to capture the rich structures of language by ranking and analyzing a collection of human-written and machine-written sentences. To overcome mode collapse when dealing with discrete data, Fedus et al. fedus2018maskgan proposed MaskGAN which used an actor-critic conditional GAN to fill in missing text conditioned on the surrounding context. Along this line, Wang et al. wang2018sentigan proposed SentiGAN to generate texts of different sentiment labels. Besides, Li et al. li2018learning improved the performance of semi-supervised text classification using adversarial training, BIBREF5 , BIBREF6 designed GAN-based models for distance supervision relation extraction. Although various GAN based approaches have been explored for many applications, none of these approaches tackles open-domain event extraction from online texts. We propose a novel GAN-based event extraction model called AEM. 
Compared with the previous models, AEM has the following differences: (1) Unlike most GAN-based text generation approaches, a generator network is employed in AEM to learn the projection function between an event distribution and the event-related word distributions (entity, location, keyword, date). The learned generator captures event-related patterns rather than generating text sequence; (2) Different from LEM and DPEMM, AEM uses a generator network to capture the event-related patterns and is able to mine events from different text sources (short and long). Moreover, unlike traditional inference procedure, such as Gibbs sampling used in LEM and DPEMM, AEM could extract the events more efficiently due to the CUDA acceleration; (3) The discriminative features learned by the discriminator of AEM provide a straightforward way to visualize the extracted events. Methodology We describe Adversarial-neural Event Model (AEM) in this section. An event is represented as a quadruple < INLINEFORM0 >, where INLINEFORM1 stands for non-location named entities, INLINEFORM2 for a location, INLINEFORM3 for event-related keywords, INLINEFORM4 for a date, and each component in the tuple is represented by component-specific representative words. AEM is constituted by three components: (1) The document representation module, as shown at the top of Figure FIGREF4 , defines a document representation approach which converts an input document from the online text corpus into INLINEFORM0 which captures the key event elements; (2) The generator INLINEFORM1 , as shown in the lower-left part of Figure FIGREF4 , generates a fake document INLINEFORM2 which is constituted by four multinomial distributions using an event distribution INLINEFORM3 drawn from a Dirichlet distribution as input; (3) The discriminator INLINEFORM4 , as shown in the lower-right part of Figure FIGREF4 , distinguishes the real documents from the fake ones and its output is subsequently employed as a learning signal to update the INLINEFORM5 and INLINEFORM6 . The details of each component are presented below. Document Representation Each document INLINEFORM0 in a given corpus INLINEFORM1 is represented as a concatenation of 4 multinomial distributions which are entity distribution ( INLINEFORM2 ), location distribution ( INLINEFORM3 ), keyword distribution ( INLINEFORM4 ) and date distribution ( INLINEFORM5 ) of the document. As four distributions are calculated in a similar way, we only describe the computation of the entity distribution below as an example. The entity distribution INLINEFORM0 is represented by a normalized INLINEFORM1 -dimensional vector weighted by TF-IDF, and it is calculated as: INLINEFORM2 where INLINEFORM0 is the pseudo corpus constructed by removing all non-entity words from INLINEFORM1 , INLINEFORM2 is the total number of distinct entities in a corpus, INLINEFORM3 denotes the number of INLINEFORM4 -th entity appeared in document INLINEFORM5 , INLINEFORM6 represents the number of documents in the corpus, and INLINEFORM7 is the number of documents that contain INLINEFORM8 -th entity, and the obtained INLINEFORM9 denotes the relevance between INLINEFORM10 -th entity and document INLINEFORM11 . Similarly, location distribution INLINEFORM0 , keyword distribution INLINEFORM1 and date distribution INLINEFORM2 of INLINEFORM3 could be calculated in the same way, and the dimensions of these distributions are denoted as INLINEFORM4 , INLINEFORM5 and INLINEFORM6 , respectively. 
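As a rough sketch of this computation (before the four distributions are concatenated below), the following Python snippet builds normalized entity distributions from per-document entity lists. The exact tf-idf weighting is only partially recoverable from the extracted text, so a standard count times log(N/df) form is assumed here.

import numpy as np
from collections import Counter

def entity_distributions(doc_entities):
    """doc_entities: list of entity-token lists, one per document.
    Returns a (num_docs x num_entities) matrix of normalized tf-idf weights."""
    vocab = sorted({e for doc in doc_entities for e in doc})
    index = {e: i for i, e in enumerate(vocab)}
    N = len(doc_entities)
    df = Counter(e for doc in doc_entities for e in set(doc))
    dist = np.zeros((N, len(vocab)))
    for d, doc in enumerate(doc_entities):
        for e, n_e in Counter(doc).items():
            dist[d, index[e]] = n_e * np.log(N / df[e])   # assumed tf-idf form
    row_sums = dist.sum(axis=1, keepdims=True)
    return np.divide(dist, row_sums, out=np.zeros_like(dist), where=row_sums > 0)

Location, keyword and date distributions would be computed the same way on their respective pseudo corpora, one row per document, ready to be concatenated.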
Finally, each document INLINEFORM7 in the corpus is represented by a INLINEFORM8 -dimensional ( INLINEFORM9 = INLINEFORM10 + INLINEFORM11 + INLINEFORM12 + INLINEFORM13 ) vector INLINEFORM14 by concatenating four computed distributions. Network Architecture The generator network INLINEFORM0 is designed to learn the projection function between the document-event distribution INLINEFORM1 and the four document-level word distributions (entity distribution, location distribution, keyword distribution and date distribution). More concretely, INLINEFORM0 consists of a INLINEFORM1 -dimensional document-event distribution layer, INLINEFORM2 -dimensional hidden layer and INLINEFORM3 -dimensional event-related word distribution layer. Here, INLINEFORM4 denotes the event number, INLINEFORM5 is the number of units in the hidden layer, INLINEFORM6 is the vocabulary size and equals to INLINEFORM7 + INLINEFORM8 + INLINEFORM9 + INLINEFORM10 . As shown in Figure FIGREF4 , INLINEFORM11 firstly employs a random document-event distribution INLINEFORM12 as an input. To model the multinomial property of the document-event distribution, INLINEFORM13 is drawn from a Dirichlet distribution parameterized with INLINEFORM14 which is formulated as: DISPLAYFORM0 where INLINEFORM0 is the hyper-parameter of the dirichlet distribution, INLINEFORM1 is the number of events which should be set in AEM, INLINEFORM2 , INLINEFORM3 represents the proportion of event INLINEFORM4 in the document and INLINEFORM5 . Subsequently, INLINEFORM0 transforms INLINEFORM1 into a INLINEFORM2 -dimensional hidden space using a linear layer followed by layer normalization, and the transformation is defined as: DISPLAYFORM0 where INLINEFORM0 represents the weight matrix of hidden layer, and INLINEFORM1 denotes the bias term, INLINEFORM2 is the parameter of LeakyReLU activation and is set to 0.1, INLINEFORM3 and INLINEFORM4 denote the normalized hidden states and the outputs of the hidden layer, and INLINEFORM5 represents the layer normalization. Then, to project INLINEFORM0 into four document-level event related word distributions ( INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 shown in Figure FIGREF4 ), four subnets (each contains a linear layer, a batch normalization layer and a softmax layer) are employed in INLINEFORM5 . And the exact transformation is based on the formulas below: DISPLAYFORM0 where INLINEFORM0 means softmax layer, INLINEFORM1 , INLINEFORM2 , INLINEFORM3 and INLINEFORM4 denote the weight matrices of the linear layers in subnets, INLINEFORM5 , INLINEFORM6 , INLINEFORM7 and INLINEFORM8 represent the corresponding bias terms, INLINEFORM9 , INLINEFORM10 , INLINEFORM11 and INLINEFORM12 are state vectors. INLINEFORM13 , INLINEFORM14 , INLINEFORM15 and INLINEFORM16 denote the generated entity distribution, location distribution, keyword distribution and date distribution, respectively, that correspond to the given event distribution INLINEFORM17 . And each dimension represents the relevance between corresponding entity/location/keyword/date term and the input event distribution. Finally, four generated distributions are concatenated to represent the generated document INLINEFORM0 corresponding to the input INLINEFORM1 : DISPLAYFORM0 The discriminator network INLINEFORM0 is designed as a fully-connected network which contains an input layer, a discriminative feature layer (discriminative features are employed for event visualization) and an output layer. 
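Before detailing the discriminator further, a minimal PyTorch sketch of the generator described above may help. The hidden size, vocabulary sizes, Dirichlet concentration and the exact ordering of the LeakyReLU activation relative to layer normalization are assumptions, since the inline expressions are not recoverable from the extracted text.

import torch
import torch.nn as nn

class AEMGenerator(nn.Module):
    """Maps a K-dim document-event distribution to four concatenated word distributions."""
    def __init__(self, K, H, V_e, V_l, V_k, V_d):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(K, H), nn.LayerNorm(H), nn.LeakyReLU(0.1))
        def make_head(V):
            return nn.Sequential(nn.Linear(H, V), nn.BatchNorm1d(V), nn.Softmax(dim=-1))
        self.heads = nn.ModuleList([make_head(V) for V in (V_e, V_l, V_k, V_d)])

    def forward(self, theta):
        h = self.hidden(theta)
        # Concatenate entity, location, keyword and date distributions into a fake document.
        return torch.cat([head(h) for head in self.heads], dim=-1)

# Sample the input event distribution from a Dirichlet prior; alpha is an illustrative value.
K, H, alpha = 25, 200, 0.1
G = AEMGenerator(K, H, V_e=500, V_l=100, V_k=2000, V_d=50)
theta = torch.distributions.Dirichlet(torch.full((K,), alpha)).sample((64,))
fake_docs = G(theta)              # shape: (64, V_e + V_l + V_k + V_d)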
In AEM, INLINEFORM1 uses fake document INLINEFORM2 and real document INLINEFORM3 as input and outputs the signal INLINEFORM4 to indicate the source of the input data (lower value denotes that INLINEFORM5 is prone to predict the input data as a fake document and vice versa). As have previously been discussed in BIBREF7 , BIBREF8 , lipschitz continuity of INLINEFORM0 network is crucial to the training of the GAN-based approaches. To ensure the lipschitz continuity of INLINEFORM1 , we employ the spectral normalization technique BIBREF9 . More concretely, for each linear layer INLINEFORM2 (bias term is omitted for simplicity) in INLINEFORM3 , the weight matrix INLINEFORM4 is normalized by INLINEFORM5 . Here, INLINEFORM6 is the spectral norm of the weight matrix INLINEFORM7 with the definition below: DISPLAYFORM0 which is equivalent to the largest singular value of INLINEFORM0 . The weight matrix INLINEFORM1 is then normalized using: DISPLAYFORM0 Obviously, the normalized weight matrix INLINEFORM0 satisfies that INLINEFORM1 and further ensures the lipschitz continuity of the INLINEFORM2 network BIBREF9 . To reduce the high cost of computing spectral norm INLINEFORM3 using singular value decomposition at each iteration, we follow BIBREF10 and employ the power iteration method to estimate INLINEFORM4 instead. With this substitution, the spectral norm can be estimated with very small additional computational time. Objective and Training Procedure The real document INLINEFORM0 and fake document INLINEFORM1 shown in Figure FIGREF4 could be viewed as random samples from two distributions INLINEFORM2 and INLINEFORM3 , and each of them is a joint distribution constituted by four Dirichlet distributions (corresponding to entity distribution, location distribution, keyword distribution and date distribution). The training objective of AEM is to let the distribution INLINEFORM4 (produced by INLINEFORM5 network) to approximate the real data distribution INLINEFORM6 as much as possible. To compare the different GAN losses, Kurach kurach2018gan takes a sober view of the current state of GAN and suggests that the Jansen-Shannon divergence used in BIBREF3 performs more stable than variant objectives. Besides, Kurach also advocates that the gradient penalty (GP) regularization devised in BIBREF8 will further improve the stability of the model. Thus, the objective function of the proposed AEM is defined as: DISPLAYFORM0 where INLINEFORM0 denotes the discriminator loss, INLINEFORM1 represents the gradient penalty regularization loss, INLINEFORM2 is the gradient penalty coefficient which trade-off the two components of objective, INLINEFORM3 could be obtained by sampling uniformly along a straight line between INLINEFORM4 and INLINEFORM5 , INLINEFORM6 denotes the corresponding distribution. The training procedure of AEM is presented in Algorithm SECREF15 , where INLINEFORM0 is the event number, INLINEFORM1 denotes the number of discriminator iterations per generator iteration, INLINEFORM2 is the batch size, INLINEFORM3 represents the learning rate, INLINEFORM4 and INLINEFORM5 are hyper-parameters of Adam BIBREF11 , INLINEFORM6 denotes INLINEFORM7 . In this paper, we set INLINEFORM8 , INLINEFORM9 , INLINEFORM10 . Moreover, INLINEFORM11 , INLINEFORM12 and INLINEFORM13 are set as 0.0002, 0.5 and 0.999. [!h] Training procedure for AEM [1] INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 the trained INLINEFORM7 and INLINEFORM8 . 
Initial INLINEFORM9 parameters INLINEFORM10 and INLINEFORM11 parameter INLINEFORM12 INLINEFORM13 has not converged INLINEFORM14 INLINEFORM15 Sample INLINEFORM16 , Sample a random INLINEFORM17 Sample a random number INLINEFORM18 INLINEFORM19 INLINEFORM20 INLINEFORM21 INLINEFORM22 INLINEFORM23 INLINEFORM24 Sample INLINEFORM25 noise INLINEFORM26 INLINEFORM27 Event Generation After the model training, the generator INLINEFORM0 learns the mapping function between the document-event distribution and the document-level event-related word distributions (entity, location, keyword and date). In other words, with an event distribution INLINEFORM1 as input, INLINEFORM2 could generate the corresponding entity distribution, location distribution, keyword distribution and date distribution. In AEM, we employ event seed INLINEFORM0 , an INLINEFORM1 -dimensional vector with one-hot encoding, to generate the event related word distributions. For example, in ten event setting, INLINEFORM2 represents the event seed of the first event. With the event seed INLINEFORM3 as input, the corresponding distributions could be generated by INLINEFORM4 based on the equation below: DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 , INLINEFORM2 and INLINEFORM3 denote the entity distribution, location distribution, keyword distribution and date distribution of the first event respectively. Experiments In this section, we firstly describe the datasets and baseline approaches used in our experiments and then present the experimental results. Experimental Setup To validate the effectiveness of AEM for extracting events from social media (e.g. Twitter) and news media sites (e.g. Google news), three datasets (FSD BIBREF12 , Twitter, and Google datasets) are employed. Details are summarized below: FSD dataset (social media) is the first story detection dataset containing 2,499 tweets. We filter out events mentioned in less than 15 tweets since events mentioned in very few tweets are less likely to be significant. The final dataset contains 2,453 tweets annotated with 20 events. Twitter dataset (social media) is collected from tweets published in the month of December in 2010 using Twitter streaming API. It contains 1,000 tweets annotated with 20 events. Google dataset (news article) is a subset of GDELT Event Database INLINEFORM0 , documents are retrieved by event related words. For example, documents which contain `malaysia', `airline', `search' and `plane' are retrieved for event MH370. By combining 30 events related documents, the dataset contains 11,909 news articles. We choose the following three models as the baselines: K-means is a well known data clustering algorithm, we implement the algorithm using sklearn toolbox, and represent documents using bag-of-words weighted by TF-IDF. LEM BIBREF13 is a Bayesian modeling approach for open-domain event extraction. It treats an event as a latent variable and models the generation of an event as a joint distribution of its individual event elements. We implement the algorithm with the default configuration. DPEMM BIBREF14 is a non-parametric mixture model for event extraction. It addresses the limitation of LEM that the number of events should be known beforehand. We implement the model with the default configuration. For social media text corpus (FSD and Twitter), a named entity tagger specifically built for Twitter is used to extract named entities including locations from tweets. 
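Before continuing with the preprocessing details, here is a minimal PyTorch sketch of the spectrally-normalized discriminator, the gradient penalty and one discriminator update described in the preceding paragraphs. It reuses fake_docs from the generator sketch above; the Adam settings (0.0002, 0.5, 0.999) follow the text, while the layer widths, the penalty weight of 10 and the stand-in "real" batch are assumptions, since several exact settings are lost in the extracted placeholders.

import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class AEMDiscriminator(nn.Module):
    """Fully-connected critic; `features` exposes the discriminative layer used for visualization."""
    def __init__(self, V, F=128):
        super().__init__()
        self.body = nn.Sequential(
            spectral_norm(nn.Linear(V, 256)), nn.LeakyReLU(0.1),
            spectral_norm(nn.Linear(256, F)), nn.LeakyReLU(0.1))
        self.out = spectral_norm(nn.Linear(F, 1))

    def features(self, d):
        return self.body(d)

    def forward(self, d):
        return self.out(self.body(d))

def gradient_penalty(D, real, fake):
    """Penalty on points sampled uniformly along the line between real and fake documents."""
    eps = torch.rand(real.size(0), 1)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# One discriminator step (the paper alternates several such steps per generator step).
bce = nn.BCEWithLogitsLoss()
D = AEMDiscriminator(V=fake_docs.size(1))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
fake = fake_docs.detach()
real = torch.softmax(torch.randn_like(fake), dim=-1)     # stand-in for real documents
loss_D = (bce(D(real), torch.ones(real.size(0), 1))
          + bce(D(fake), torch.zeros(fake.size(0), 1))
          + 10.0 * gradient_penalty(D, real, fake))      # assumed penalty weight
opt_D.zero_grad(); loss_D.backward(); opt_D.step()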
A Twitter Part-of-Speech (POS) tagger BIBREF15 is used for POS tagging and only words tagged with nouns, verbs and adjectives are retained as keywords. For the Google dataset, we use the Stanford Named Entity Recognizer to identify the named entities (organization, location and person). Due to the `date' information not being provided in the Google dataset, we further divide the non-location named entities into two categories (`person' and `organization') and employ a quadruple <organization, location, person, keyword> to denote an event in news articles. We also remove common stopwords and only keep the recognized named entities and the tokens which are verbs, nouns or adjectives. Experimental Results To evaluate the performance of the proposed approach, we use the evaluation metrics such as precision, recall and F-measure. Precision is defined as the proportion of the correctly identified events out of the model generated events. Recall is defined as the proportion of correctly identified true events. For calculating the precision of the 4-tuple, we use following criteria: (1) Do the entity/organization, location, date/person and keyword that we have extracted refer to the same event? (2) If the extracted representation contains keywords, are they informative enough to tell us what happened? Table TABREF35 shows the event extraction results on the three datasets. The statistics are obtained with the default parameter setting that INLINEFORM0 is set to 5, number of hidden units INLINEFORM1 is set to 200, and INLINEFORM2 contains three fully-connected layers. The event number INLINEFORM3 for three datasets are set to 25, 25 and 35, respectively. The examples of extracted events are shown in Table. TABREF36 . It can be observed that K-means performs the worst over all three datasets. On the social media datasets, AEM outpoerforms both LEM and DPEMM by 6.5% and 1.7% respectively in F-measure on the FSD dataset, and 4.4% and 3.7% in F-measure on the Twitter dataset. We can also observe that apart from K-means, all the approaches perform worse on the Twitter dataset compared to FSD, possibly due to the limited size of the Twitter dataset. Moreover, on the Google dataset, the proposed AEM performs significantly better than LEM and DPEMM. It improves upon LEM by 15.5% and upon DPEMM by more than 30% in F-measure. This is because: (1) the assumption made by LEM and DPEMM that all words in a document are generated from a single event is not suitable for long text such as news articles; (2) DPEMM generates too many irrelevant events which leads to a very low precision score. Overall, we see the superior performance of AEM across all datasets, with more significant improvement on the for Google datasets (long text). We next visualize the detected events based on the discriminative features learned by the trained INLINEFORM0 network in AEM. The t-SNE BIBREF16 visualization results on the datasets are shown in Figure FIGREF19 . For clarity, each subplot is plotted on a subset of the dataset containing ten randomly selected events. It can be observed that documents describing the same event have been grouped into the same cluster. 
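The t-SNE visualization described here can be reproduced with a short scikit-learn snippet; the feature matrix, labels and file names below are hypothetical placeholders for the discriminative-layer activations of the trained discriminator and the annotated event ids.

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Hypothetical inputs: per-document discriminative features and gold event ids.
features = np.load("doc_features.npy")
labels = np.load("doc_event_ids.npy")

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(features)
plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=8, cmap="tab10")
plt.title("t-SNE of discriminative features (documents colored by event)")
plt.show()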
To further evaluate if a variation of the parameters INLINEFORM0 (the number of discriminator iterations per generator iteration), INLINEFORM1 (the number of units in hidden layer) and the structure of generator INLINEFORM2 will impact the extraction performance, additional experiments have been conducted on the Google dataset, with INLINEFORM3 set to 5, 7 and 10, INLINEFORM4 set to 100, 150 and 200, and three INLINEFORM5 structures (3, 4 and 5 layers). The comparison results on precision, recall and F-measure are shown in Figure FIGREF20 . From the results, it could be observed that AEM with the 5-layer generator performs the best and achieves 96.7% in F-measure, and the worst F-measure obtained by AEM is 85.7%. Overall, the AEM outperforms all compared approaches acorss various parameter settings, showing relatively stable performance. Finally, we compare in Figure FIGREF37 the training time required for each model, excluding the constant time required by each model to load the data. We could observe that K-means runs fastest among all four approaches. Both LEM and DPEMM need to sample the event allocation for each document and update the relevant counts during Gibbs sampling which are time consuming. AEM only requires a fraction of the training time compared to LEM and DPEMM. Moreover, on a larger dataset such as the Google dataset, AEM appears to be far more efficient compared to LEM and DPEMM. Conclusions and Future Work In this paper, we have proposed a novel approach based on adversarial training to extract the structured representation of events from online text. The experimental comparison with the state-of-the-art methods shows that AEM achieves improved extraction performance, especially on long text corpora with an improvement of 15% observed in F-measure. AEM only requires a fraction of training time compared to existing Bayesian graphical modeling approaches. In future work, we will explore incorporating external knowledge (e.g. word relatedness contained in word embeddings) into the learning framework for event extraction. Besides, exploring nonparametric neural event extraction approaches and detecting the evolution of events over time from news articles are other promising future directions. Acknowledgments We would like to thank anonymous reviewers for their valuable comments and helpful suggestions. This work was funded by the National Key Research and Development Program of China (2016YFC1306704), the National Natural Science Foundation of China (61772132), the Natural Science Foundation of Jiangsu Province of China (BK20161430). Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Do they report results only on English data? Only give me the answer and do not output any other words.
Answer:
[ "Unanswerable", "Unanswerable" ]
276
5,336
single_doc_qa
3b2226bdd77b43e09037685cf3e77e86
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1:
Paper Info
Title: Crossed Nonlinear Dynamical Hall Effect in Twisted Bilayers
Publish Date: 17 Mar 2023
Author List: Cong Chen, Dawei Zhai, Cong Xiao, Wang Yao
Figure captions:
FIG. 1. (a) Schematics of experimental setup. (b, c) Valence band structure and intrinsic Hall conductivity with respect to in-plane input for tMoTe2 at twist angles (b) θ = 1.2° and (c) θ = 2° in +K valley. Color coding in (b) and (c) denotes the layer composition σ^z_n(k).
FIG. 2. (a) The interlayer BCP G, and (b) its vorticity [∂_k × G]_z on the first valence band from +K valley of 1.2° tMoTe2. Background color and arrows in (a) denote the magnitude and vector flow, respectively. Grey curves in (b) show energy contours at 1/2 and 3/4 of the band width. The black dashed arrow denotes direction of increasing hole doping level. Black dashed hexagons in (a, b) denote the boundary of moiré Brillouin zone (mBZ).
FIG. 3. (a-c) Three high-symmetry stacking registries for tBG with a commensurate twist angle θ = 21.8°. Lattice geometries with rotation center on an overlapping atomic site (a, b) and hexagonal center (c). (d) Schematic of the moiré pattern when the twist angle slightly deviates from 21.8°, here θ = 21°. Red squares marked by A, B and C are the local regions that resemble commensurate 21.8° patterns in (a), (b) and (c), respectively. (e, f) Low-energy band structures and intrinsic Hall conductivity of the two geometries [(a) and (b) are equivalent]. The shaded areas highlight energy windows ∼ ω around band degeneracies where interband transitions, not considered here, may quantitatively affect the conductivity measured.
FIG. S4. Band structure and layer composition σ^z_n in +K valley of tBG (left panel) and the intrinsic Hall conductivity (right panel) at three different twist angles θ. The shaded areas highlight energy windows ∼ ω around band degeneracies in which the conductivity results should not be considered. Here σ_H should be multiplied by a factor of 2 accounting for spin degeneracy.
Abstract
We propose an unconventional nonlinear dynamical Hall effect characteristic of twisted bilayers. The joint action of in-plane and out-of-plane ac electric fields generates Hall currents j ∼ Ė⊥ × E in both sum and difference frequencies, and when the two orthogonal fields have common frequency their phase difference controls the on/off, direction and magnitude of the rectified dc Hall current. This novel intrinsic Hall response has a band geometric origin in the momentum space curl of interlayer Berry connection polarizability, arising from layer hybridization of electrons by the twisted interlayer coupling. The effect allows a unique rectification functionality and a transport probe of chiral symmetry in bilayer systems. We show sizable effects in twisted homobilayer transition metal dichalcogenides and twisted bilayer graphene over a broad range of twist angles.
Nonlinear Hall-type response to an in-plane electric field in a two dimensional (2D) system with time reversal symmetry has attracted marked interest. Intensive studies have been devoted to uncovering new types of nonlinear Hall transport induced by quantum geometry and their applications such as terahertz rectification and magnetic information readout.
Restricted by symmetry , the known mechanisms of nonlinear Hall response in quasi-2D nonmagnetic materials are all of extrinsic nature, sensitive to fine details of disorders , which have limited their utilization for practical applications. Moreover, having a single driving field only, the effect has not unleashed the full potential of nonlinearity for enabling controlled gate in logic operation, where separable inputs (i.e., in orthogonal directions) are desirable. The latter, in the context of Hall effect, calls for control by both out-of-plane and in-plane electric fields. A strategy to introduce quantum geometric response to out-of-plane field in quasi-2D geometry is made possible in van der Waals (vdW) layered structures with twisted stacking . Taking homobilayer as an example, electrons have an active layer degree of freedom that is associated with an out-of-plane electric dipole , whereas interlayer quantum tunneling rotates this pseudospin about in-plane axes that are of topologically nontrivial textures in the twisted landscapes . Such layer pseudospin structures can underlie novel quantum geometric properties when coupled with out-ofplane field. Recent studies have found layer circular photogalvanic effect and layer-contrasted time-reversaleven Hall effect , arising from band geometric quantities. In this work we unveil a new type of nonlinear Hall effect in time-reversal symmetric twisted bilayers, where an intrinsic Hall current emerges under the combined action of an in-plane electric field E and an out-of-plane ac field E ⊥ (t): j ∼ Ė⊥ × E [see Fig. ]. Having the two driving fields (inputs) and the current response (output) all orthogonal to each other, the effect is dubbed as the crossed nonlinear dynamical Hall effect. This is also the first nonlinear Hall contribution of an intrinsic nature in nonmagnetic materials without external magnetic field, determined solely by the band structures, not relying on extrinsic factors such as disorders and relaxation times. The effect arises from the interlayer hybridization of electronic states under the chiral crystal symmetry characteristic of twisted bilayers, and has a novel band geometric origin in the momentum space curl of interlayer Berry connection polarizability (BCP). Having two driving fields of the same frequency, a dc Hall current develops, whose on/off, direction and magnitude can all be controlled by the phase difference of the two fields, which does not affect the magnitude of the double-frequency component. Such a characteristic tunability renders this effect a unique approach to rectification and transport probe of chiral bilayers. As examples, we show sizable effects in small angle twisted transition metal dichalcogenides (tTMDs) and twisted bilayer graphene (tBG), as well as tBG of large angles where Umklapp interlayer tunneling dominates. Geometric origin of the effect. A bilayer system couples to in-plane and out-of-plane driving electric fields in completely different ways. The in-plane field couples to the 2D crystal momentum, leading to Berry-phase effects in the 2D momentum space . In comparison, the outof-plane field is coupled to the interlayer dipole moment p in the form of −E ⊥ p, where p = ed 0 σz with σz as the Pauli matrix in the layer index subspace and d 0 the interlayer distance. When the system has a more than twofold rotational axis in the z direction, as in tBG and tTMDs, any in-plane current driven by the out-of-plane field alone is forbidden. 
It also prohibits the off-diagonal components of the symmetric part of the conductivity tensor σ ab = ∂j a /∂E ||,b with respect to the in-plane input and output. Since the antisymmetric part of σ ab is not allowed by the Onsager reciprocity in nonmagnetic systems, all the off-diagonal components of σ ab is forbidden, irrespective of the order of out-of-plane field. On the other hand, as we will show, an in-plane Hall conductivity σ xy = −σ yx can still be driven by the product of an in-plane field and the time variation rate of an outof-plane ac field, which is a characteristic effect of chiral bilayers. To account for the effect, we make use of the semiclassical theory . The velocity of an electron in a bilayer system is given by where k is the 2D crystal momentum. Here and hereafter we suppress the band index for simplicity, unless otherwise noted. The three contributions in this equation come from the band velocity, the anomalous velocities induced by the k -space Berry curvature Ω k and by the hybrid Berry curvature Ω kE ⊥ in the (k, E ⊥ ) space. For the velocity at the order of interest, the k-space Berry curvature is corrected to the first order of the variation rate of out-of-plane field Ė⊥ as Here A = u k |i∂ k |u k is the unperturbed k-space Berry connection, with |u k being the cell-periodic part of the Bloch wave, whereas is its gauge invariant correction , which can be identified physically as an in-plane positional shift of an electron induced by the time evolution of the out-of-plane field. For a band with index n, we have whose numerator involves the interband matrix elements of the interlayer dipole and velocity operators, and ε n is the unperturbed band energy. Meanwhile, up to the first order of in-plane field, the hybrid Berry curvature reads Here A E || is the k-space Berry connection induced by E || field , which represents an intralayer positional shift and whose detailed expression is not needed for our purpose. and is its first order correction induced by the in-plane field. In addition, ε = ε + δε, where δε = eE • G Ė⊥ is the field-induced electron energy . Given that A E || is the E ⊥ -space counterpart of intralayer shift A E || , and that E ⊥ is conjugate to the interlayer dipole moment, we can pictorially interpret A E || as the interlayer shift induced by in-plane field. It indeed has the desired property of flipping sign under the horizontal mirror-plane reflection, hence is analogous to the so-called interlayer coordinate shift introduced in the study of layer circular photogalvanic effect , which is nothing but the E ⊥ -space counterpart of the shift vector well known in the nonlinear optical phenomenon of shift current. Therefore, the E ⊥ -space BCP eG/ can be understood as the interlayer BCP. This picture is further augmented by the connotation that the interlayer BCP is featured exclusively by interlayer-hybridized electronic states: According to Eq. ( ), if the state |u n is fully polarized in a specific layer around some momentum k, then G (k) is suppressed. With the velocity of individual electrons, the charge current density contributed by the electron system can be obtained from where [dk] is shorthand for n d 2 k/(2π) 2 , and the distribution function is taken to be the Fermi function f 0 as we focus on the intrinsic response. The band geometric contributions to ṙ lead to a Hall current where is intrinsic to the band structure. 
This band geometric quantity measures the k-space curl of the interlayer BCP over the occupied states, and hence is also a characteristic of layer-hybridized electronic states. Via an integration by parts, it becomes clear that χ int is a Fermi surface property. Since χ int is a time-reversal even pseudoscalar, it is invariant under rotation, but flips sign under space inversion, mirror reflection and rotoreflection symmetries. As such, χ int is allowed if and only if the system possesses a chiral crystal structure, which is the very case of twisted bilayers . Moreover, since twisted structures with opposite twist angles are mirror images of each other, whereas the mirror reflection flips the sign of χ int , the direction of Hall current can be reversed by reversing twist direction. Hall rectification and frequency doubling. This effect can be utilized for the rectification and frequency doubling of an in-plane ac input E = E 0 cos ωt, provided that the out-of-plane field has the same frequency, namely E ⊥ = E 0 ⊥ cos (ωt + ϕ). The phase difference ϕ between the two fields plays an important role in determining the Hall current, which takes the form of j = j 0 sin ϕ + j 2ω sin(2ωt + ϕ). ( Here ω is required to be below the threshold for direct interband transition in order to validate the semiclassical treatment, and σ H has the dimension of conductance and quantifies the Hall response with respect to the in-plane input. In experiment, the Hall output by the crossed nonlinear dynamic Hall effect can be distinguished readily from the conventional nonlinear Hall effect driven by in-plane field alone, as they are odd and even, respectively, in the inplane field. One notes that while the double-frequency component appears for any ϕ, the rectified output is allowed only if the two crossed driving fields are not in-phase or antiphase. Its on/off, chirality (right or left), and magnitude are all controlled by the phase difference of the two fields. Such a unique tunability provides not only a prominent experimental hallmark of this effect, but also a controllable route to Hall rectification. In addition, reversing the direction of the out-of-plane field switches that of the Hall current, which also serves as a control knob. Application to tTMDs. We now study the effect quantitatively in tTMDs, using tMoTe 2 as an example (see details of the continuum model in ). For illustrative purposes, we take ω/2π = 0.1 THz and E 0 ⊥ d 0 = 10 mV in what follows. Figures ) and (c) present the electronic band structures along with the layer composition σ z n (k) at twist angles θ = 1.2 • and θ = 2 • . In both cases, the energy spectra exhibit isolated narrow bands with strong layer hybridization. At θ = 1.2 • , the conductivity shows two peaks ∼ 0.1e 2 /h at low energies associated with the first two valence bands. The third band does not host any sizable conductivity signal. At higher hole-doping levels, a remarkable conductivity peak ∼ e 2 /h appears near the gap separating the fourth and fifth bands. At θ = 2 • , the conductivity shows smaller values, but the overall trends are similar: A peak ∼ O(0.01)e 2 /h appears at low energies, while larger responses ∼ O(0.1)e 2 /h can be spotted as the Fermi level decreases. One can understand the behaviors of σ H from the interlayer BCP in Eq. ( ). It favors band near-degeneracy regions in k -space made up of strongly layer hybridized electronic states. 
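As a side note on the rectification form j = j0 sin ϕ + j2ω sin(2ωt + ϕ), the decomposition can be checked with elementary trigonometry. The short NumPy sketch below uses arbitrary field amplitudes and an illustrative response coefficient (signs and prefactors are convention dependent), and only verifies that the product of dE⊥/dt and E splits into a dc part proportional to sin ϕ plus a 2ω part.

import numpy as np

chi = 1.0                                   # illustrative response coefficient
E_par, E_perp, w, phi = 1.0, 1.0, 2 * np.pi * 0.1e12, 0.3 * np.pi
t = np.linspace(0, 4 * 2 * np.pi / w, 4000, endpoint=False)   # four full periods

dE_perp_dt = -w * E_perp * np.sin(w * t + phi)     # d/dt [E_perp * cos(wt + phi)]
j = chi * dE_perp_dt * (E_par * np.cos(w * t))     # magnitude of the in-plane Hall current

dc = j.mean()                                      # rectified component, ~ sin(phi)
predicted_dc = -0.5 * chi * w * E_perp * E_par * np.sin(phi)
print(dc, predicted_dc)                            # agree up to discretization error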
As such, the conductivity is most pronounced when the Fermi level is located around such regions, which directly accounts for the peaks of response in Fig. . One can further see that $[\partial_{\bm k}\times\mathbf{G}]_z$ is negligible at lower energies and is dominated by positive values as the doping increases, thus the conductivity rises initially. When the doping level is higher, regions with $[\partial_{\bm k}\times\mathbf{G}]_z < 0$ start to contribute, thus the conductivity decreases after reaching a maximum. Application to tBG. The second example is tBG. We focus on commensurate twist angles in the large-angle limit in the main text, which possess moiré-lattice assisted strong interlayer tunneling via Umklapp processes. This case is appealing because the Umklapp interlayer tunneling is a manifestation of the discrete translational symmetry of the moiré superlattice, which is irrelevant at small twist angles and not captured by the continuum model but plays important roles in physical contexts such as higher-order topological insulators and moiré excitons. The Umklapp tunneling is strongest for the commensurate twist angles of $\theta = 21.8^\circ$ and $\theta = 38.2^\circ$, whose corresponding periodic moiré superlattices have the smallest lattice constant ($\sqrt{7}$ times that of the monolayer counterpart). Such a small moiré scale implies that the exact crystalline symmetry, which depends sensitively on fine details of the rotation center, has critical influence on low-energy response properties. To capture the Umklapp tunneling, we employ the tight-binding model. Figures ) and (c) show two distinct commensurate structures of tBG at $\theta = 21.8^\circ$ belonging to the chiral point groups $D_3$ and $D_6$, respectively. The atomic configurations in Figs. ) are equivalent, which are constructed by twisting AA-stacked bilayer graphene around an overlapping atom site, and that in Fig. ) is obtained by rotating around a hexagonal center. Band structures of these two configurations are drastically different within a low-energy window of $\sim 10$ meV around the $\kappa$ point. Remarkably, despite the large $\theta$, we still get $\sigma_H \sim O(0.001)\,e^2/h$ ($D_3$) and $\sim O(0.1)\,e^2/h$ ($D_6$), which are comparable to those at small angles (cf. Fig. in the Supplemental Material). Such sizable responses can be attributed to the strong interlayer coupling enabled by Umklapp processes. Apart from different intensities, the Hall conductivities in the two stacking configurations have distinct energy dependence: In Fig. , $\sigma_H$ shows a single peak centered at zero energy; in Fig. (f), it exhibits two antisymmetric peaks around zero. The peaks are centered around band degeneracies, and their profiles can be understood from the distribution of $[\partial_{\bm k}\times\mathbf{G}]_z$. Figure (d) illustrates the atomic structure of tBG with a twist angle slightly deviating from $\theta = 21.8^\circ$, forming a supermoiré pattern. At short range, the local stacking geometries resemble the commensurate configurations at $\theta = 21.8^\circ$, while the stacking registries at different locales differ by a translation. Similar to the moiré landscapes in the small-angle limit, there also exist high-symmetry locales: Regions A and B enclose the $D_3$ structure, and region C contains the $D_6$ configuration. A position-dependent Hall response is therefore expected in such a supermoiré. As the intrinsic Hall signal from the $D_6$ configuration dominates [see Figs. 3(e) vs (f)], the net response mimics that in Fig. .
Discussion. We have uncovered the crossed nonlinear dynamical intrinsic Hall effect characteristic of layer hybridized electronic states in twisted bilayers, and elucidated its geometric origin in the $\bm{k}$-space curl of the interlayer BCP. It offers a new tool for rectification and frequency doubling in chiral vdW bilayers, and is sizable in tTMD and tBG. Here our focus is on the intrinsic effect, which can be evaluated quantitatively for each material and provides a benchmark for experiments. There may also be extrinsic contributions, similar to the side jump and skew scattering ones in anomalous Hall effect. They typically have distinct scaling behavior with the relaxation time $\tau$ from the intrinsic effect, hence can be distinguished from the latter in experiments. Moreover, they are suppressed in the clean limit $\omega\tau \gg 1$ [$(\omega\tau)^2 \gg 1$, more precisely]. In high-quality tBG samples, $\tau \sim$ ps at room temperature. Much longer $\tau$ can be obtained at lower temperatures. In fact, a recent theory explaining well the resistivity of tBG predicted $\tau \sim 10^{-8}$ s at 10 K. As such, high-quality tBG under low temperatures and sub-terahertz input ($\omega/2\pi = 0.1$ THz) is located in the clean limit, rendering an ideal platform for isolating the intrinsic effect. This work paves a new route to driving in-plane response by out-of-plane dynamical control of layered vdW structures. The study can be generalized to other observables such as spin current and spin polarization, and the in-plane driving can be statistical forces, like temperature gradient. Such orthogonal controls rely critically on the nonconservation of layer pseudospin degree of freedom endowed by interlayer coupling, and constitute an emerging research field at the crossing of 2D vdW materials, layertronics, twistronics and nonlinear electronics. This work is supported by the Research Grant Council of Hong Kong (AoE/P-701/20, HKU SRFS2122-7S05), and the Croucher Foundation. W.Y. also acknowledges support by Tencent Foundation. Cong Chen,1,2,* Dawei Zhai,1,2,* Cong Xiao,1,2,† and Wang Yao1,2,‡ (1 Department of Physics, The University of Hong Kong, Hong Kong, China; 2 HKU-UCAS Joint Institute of Theoretical and Computational Physics at Hong Kong, China). Extra figures for tBG at small twist angles. Figure (a) shows the band structure of tBG with $\theta = 1.47^\circ$ obtained from the continuum model. The central bands are well separated from higher ones, and show Dirac points at the $\kappa/\kappa'$ points protected by valley U(1) symmetry and a composite operation of twofold rotation and time reversal $C_{2z}T$. Degeneracies at higher energies can also be identified, for example, around $\pm 75$ meV at the $\gamma$ point. As the two Dirac cones from the two layers intersect around the same area, such degeneracies are usually accompanied by strong layer hybridization [see the color in the left panel of Fig. ]. Additionally, it is well-known that the two layers are strongly coupled when $\theta$ is around the magic angle ($\sim 1.08^\circ$), rendering narrow bandwidths for the central bands. As discussed in the main text, coexistence of strong interlayer hybridization and small energy separations is expected to contribute sharp conductivity peaks near band degeneracies, as shown in Fig. . In this case, the conductivity peak near the Dirac point can reach $\sim 0.1\,e^2/h$, while the responses around $\pm 0.08$ eV are smaller at $\sim 0.01\,e^2/h$. The above features are maintained when $\theta$ is enlarged, as illustrated in Figs. ) and (c) using $\theta = 2.65^\circ$ and $\theta = 6.01^\circ$.
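A quick numerical check of the clean-limit criterion quoted in the Discussion above (our own arithmetic, not taken from the source, using the two relaxation times the text mentions):

```python
import math

f = 0.1e12            # drive frequency: 0.1 THz
omega = 2 * math.pi * f

for label, tau in [("room temperature, tau ~ 1 ps", 1e-12),
                   ("10 K, tau ~ 10 ns", 1e-8)]:
    x = omega * tau
    print(f"{label}: omega*tau = {x:.3g}, (omega*tau)^2 = {x*x:.3g}")

# room temperature, tau ~ 1 ps: omega*tau ~ 0.63   -> not deep in the clean limit
# 10 K, tau ~ 10 ns:            omega*tau ~ 6.3e3  -> (omega*tau)^2 >> 1, clean limit
```

This is consistent with the statement that low temperatures and sub-terahertz driving place high-quality tBG well inside the clean limit, whereas at room temperature $\omega\tau$ is only of order one.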
Since the interlayer coupling becomes weaker and the bands are more separated at low energies when $\theta$ increases, the intensity of the conductivity drops significantly. We stress that $\mathbf{G}$ is not defined at degenerate points, and interband transitions may occur when the energy separation satisfies $|\varepsilon_n - \varepsilon_m| \sim \hbar\omega$, the effects of which are not included in the current formulations. Consequently, the results around band degeneracies within energy $\sim \hbar\omega$ [shaded areas in Fig. ] should be excluded. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What is the significance of the interlayer Berry connection polarizability? Only give me the answer and do not output any other words.
Answer:
[ "The momentum space curl of the interlayer Berry connection polarizability generates the crossed nonlinear dynamical Hall effect." ]
276
4,712
single_doc_qa
7c46d5781af74313871f4d4c4bbf453e
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction In the kitchen, we increasingly rely on instructions from cooking websites: recipes. A cook with a predilection for Asian cuisine may wish to prepare chicken curry, but may not know all necessary ingredients apart from a few basics. These users with limited knowledge cannot rely on existing recipe generation approaches that focus on creating coherent recipes given all ingredients and a recipe name BIBREF0. Such models do not address issues of personal preference (e.g. culinary tastes, garnish choices) and incomplete recipe details. We propose to approach both problems via personalized generation of plausible, user-specific recipes using user preferences extracted from previously consumed recipes. Our work combines two important tasks from natural language processing and recommender systems: data-to-text generation BIBREF1 and personalized recommendation BIBREF2. Our model takes as user input the name of a specific dish, a few key ingredients, and a calorie level. We pass these loose input specifications to an encoder-decoder framework and attend on user profiles—learned latent representations of recipes previously consumed by a user—to generate a recipe personalized to the user's tastes. We fuse these `user-aware' representations with decoder output in an attention fusion layer to jointly determine text generation. Quantitative (perplexity, user-ranking) and qualitative analysis on user-aware model outputs confirm that personalization indeed assists in generating plausible recipes from incomplete ingredients. While personalized text generation has seen success in conveying user writing styles in the product review BIBREF3, BIBREF4 and dialogue BIBREF5 spaces, we are the first to consider it for the problem of recipe generation, where output quality is heavily dependent on the content of the instructions—such as ingredients and cooking techniques. To summarize, our main contributions are as follows: We explore a new task of generating plausible and personalized recipes from incomplete input specifications by leveraging historical user preferences; We release a new dataset of 180K+ recipes and 700K+ user reviews for this task; We introduce new evaluation strategies for generation quality in instructional texts, centering on quantitative measures of coherence. We also show qualitatively and quantitatively that personalized models generate high-quality and specific recipes that align with historical user preferences. Related Work Large-scale transformer-based language models have shown surprising expressivity and fluency in creative and conditional long-text generation BIBREF6, BIBREF7. Recent works have proposed hierarchical methods that condition on narrative frameworks to generate internally consistent long texts BIBREF8, BIBREF9, BIBREF10. Here, we generate procedurally structured recipes instead of free-form narratives. Recipe generation belongs to the field of data-to-text natural language generation BIBREF1, which sees other applications in automated journalism BIBREF11, question-answering BIBREF12, and abstractive summarization BIBREF13, among others. BIBREF14, BIBREF15 model recipes as a structured collection of ingredient entities acted upon by cooking actions. BIBREF0 imposes a `checklist' attention constraint emphasizing hitherto unused ingredients during generation. 
BIBREF16 attend over explicit ingredient references in the prior recipe step. Similar hierarchical approaches that infer a full ingredient list to constrain generation will not help personalize recipes, and would be infeasible in our setting due to the potentially unconstrained number of ingredients (from a space of 10K+) in a recipe. We instead learn historical preferences to guide full recipe generation. A recent line of work has explored user- and item-dependent aspect-aware review generation BIBREF3, BIBREF4. This work is related to ours in that it combines contextual language generation with personalization. Here, we attend over historical user preferences from previously consumed recipes to generate recipe content, rather than writing styles. Approach Our model's input specification consists of: the recipe name as a sequence of tokens, a partial list of ingredients, and a caloric level (high, medium, low). It outputs the recipe instructions as a token sequence: $\mathcal {W}_r=\lbrace w_{r,0}, \dots , w_{r,T}\rbrace $ for a recipe $r$ of length $T$. To personalize output, we use historical recipe interactions of a user $u \in \mathcal {U}$. Encoder: Our encoder has three embedding layers: vocabulary embedding $\mathcal {V}$, ingredient embedding $\mathcal {I}$, and caloric-level embedding $\mathcal {C}$. Each token in the (length $L_n$) recipe name is embedded via $\mathcal {V}$; the embedded token sequence is passed to a two-layered bidirectional GRU (BiGRU) BIBREF17, which outputs hidden states for names $\lbrace \mathbf {n}_{\text{enc},j} \in \mathbb {R}^{2d_h}\rbrace $, with hidden size $d_h$. Similarly each of the $L_i$ input ingredients is embedded via $\mathcal {I}$, and the embedded ingredient sequence is passed to another two-layered BiGRU to output ingredient hidden states as $\lbrace \mathbf {i}_{\text{enc},j} \in \mathbb {R}^{2d_h}\rbrace $. The caloric level is embedded via $\mathcal {C}$ and passed through a projection layer with weights $W_c$ to generate calorie hidden representation $\mathbf {c}_{\text{enc}} \in \mathbb {R}^{2d_h}$. Ingredient Attention: We apply attention BIBREF18 over the encoded ingredients to use encoder outputs at each decoding time step. We define an attention-score function $\alpha $ with key $K$ and query $Q$: with trainable weights $W_{\alpha }$, bias $\mathbf {b}_{\alpha }$, and normalization term $Z$. At decoding time $t$, we calculate the ingredient context $\mathbf {a}_{t}^{i} \in \mathbb {R}^{d_h}$ as: Decoder: The decoder is a two-layer GRU with hidden state $h_t$ conditioned on previous hidden state $h_{t-1}$ and input token $w_{r, t}$ from the original recipe text. We project the concatenated encoder outputs as the initial decoder hidden state: To bias generation toward user preferences, we attend over a user's previously reviewed recipes to jointly determine the final output token distribution. We consider two different schemes to model preferences from user histories: (1) recipe interactions, and (2) techniques seen therein (defined in data). BIBREF19, BIBREF20, BIBREF21 explore similar schemes for personalized recommendation. Prior Recipe Attention: We obtain the set of prior recipes for a user $u$: $R^+_u$, where each recipe can be represented by an embedding from a recipe embedding layer $\mathcal {R}$ or an average of the name tokens embedded by $\mathcal {V}$. We attend over the $k$-most recent prior recipes, $R^{k+}_u$, to account for temporal drift of user preferences BIBREF22. 
These embeddings are used in the `Prior Recipe' and `Prior Name' models, respectively. Given a recipe representation $\mathbf {r} \in \mathbb {R}^{d_r}$ (where $d_r$ is recipe- or vocabulary-embedding size depending on the recipe representation) the prior recipe attention context $\mathbf {a}_{t}^{r_u}$ is calculated as Prior Technique Attention: We calculate prior technique preference (used in the `Prior Tech` model) by normalizing co-occurrence between users and techniques seen in $R^+_u$, to obtain a preference vector $\rho _{u}$. Each technique $x$ is embedded via a technique embedding layer $\mathcal {X}$ to $\mathbf {x}\in \mathbb {R}^{d_x}$. Prior technique attention is calculated as where, inspired by copy mechanisms BIBREF23, BIBREF24, we add $\rho _{u,x}$ for technique $x$ to emphasize the attention by the user's prior technique preference. Attention Fusion Layer: We fuse all contexts calculated at time $t$, concatenating them with decoder GRU output and previous token embedding: We then calculate the token probability: and maximize the log-likelihood of the generated sequence conditioned on input specifications and user preferences. fig:ex shows a case where the Prior Name model attends strongly on previously consumed savory recipes to suggest the usage of an additional ingredient (`cilantro'). Recipe Dataset: Food.com We collect a novel dataset of 230K+ recipe texts and 1M+ user interactions (reviews) over 18 years (2000-2018) from Food.com. Here, we restrict to recipes with at least 3 steps, and at least 4 and no more than 20 ingredients. We discard users with fewer than 4 reviews, giving 180K+ recipes and 700K+ reviews, with splits as in tab:recipeixnstats. Our model must learn to generate from a diverse recipe space: in our training data, the average recipe length is 117 tokens with a maximum of 256. There are 13K unique ingredients across all recipes. Rare words dominate the vocabulary: 95% of words appear $<$100 times, accounting for only 1.65% of all word usage. As such, we perform Byte-Pair Encoding (BPE) tokenization BIBREF25, BIBREF26, giving a training vocabulary of 15K tokens across 19M total mentions. User profiles are similarly diverse: 50% of users have consumed $\le $6 recipes, while 10% of users have consumed $>$45 recipes. We order reviews by timestamp, keeping the most recent review for each user as the test set, the second most recent for validation, and the remainder for training (sequential leave-one-out evaluation BIBREF27). We evaluate only on recipes not in the training set. We manually construct a list of 58 cooking techniques from 384 cooking actions collected by BIBREF15; the most common techniques (bake, combine, pour, boil) account for 36.5% of technique mentions. We approximate technique adherence via string match between the recipe text and technique list. Experiments and Results For training and evaluation, we provide our model with the first 3-5 ingredients listed in each recipe. We decode recipe text via top-$k$ sampling BIBREF7, finding $k=3$ to produce satisfactory results. We use a hidden size $d_h=256$ for both the encoder and decoder. Embedding dimensions for vocabulary, ingredient, recipe, techniques, and caloric level are 300, 10, 50, 50, and 5 (respectively). For prior recipe attention, we set $k=20$, the 80th %-ile for the number of user interactions. We use the Adam optimizer BIBREF28 with a learning rate of $10^{-3}$, annealed with a decay rate of 0.9 BIBREF29. We also use teacher-forcing BIBREF30 in all training epochs. 
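As a rough illustration of the technique-preference attention described above, here is a sketch of our own in PyTorch — not the authors' released code. The exact form of the score function, the placement of the $\rho_u$ bias, and all tensor shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TechniqueAttention(nn.Module):
    """Additive attention over embedded techniques from a user's history,
    biased by the user's normalized technique-preference vector rho_u
    (copy-mechanism style, as described in the text)."""
    def __init__(self, d_x: int, d_h: int):
        super().__init__()
        self.proj = nn.Linear(d_x + d_h, d_h)   # assumed form of W_alpha, b_alpha
        self.v = nn.Linear(d_h, 1, bias=False)

    def forward(self, techniques, rho_u, h_t):
        # techniques: (n_tech, d_x) embedded techniques seen in R+_u
        # rho_u:      (n_tech,)     normalized user-technique co-occurrence
        # h_t:        (d_h,)        decoder hidden state at step t
        h_rep = h_t.unsqueeze(0).expand(techniques.size(0), -1)
        scores = self.v(torch.tanh(self.proj(torch.cat([techniques, h_rep], dim=-1)))).squeeze(-1)
        weights = F.softmax(scores + rho_u, dim=-1)  # add rho_u to emphasize preferred techniques
        return weights @ techniques                   # context vector, shape (d_x,)
```

In the full model the analogous ingredient, prior-recipe, and technique contexts are concatenated with the decoder GRU output and the previous token embedding in the attention fusion layer before predicting the next token.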
In this work, we investigate how leveraging historical user preferences can improve generation quality over strong baselines in our setting. We compare our personalized models against two baselines. The first is a name-based Nearest-Neighbor model (NN). We initially adapted the Neural Checklist Model of BIBREF0 as a baseline; however, we ultimately use a simple Encoder-Decoder baseline with ingredient attention (Enc-Dec), which provides comparable performance and lower complexity. All personalized models outperform baseline in BPE perplexity (tab:metricsontest) with Prior Name performing the best. While our models exhibit comparable performance to baseline in BLEU-1/4 and ROUGE-L, we generate more diverse (Distinct-1/2: percentage of distinct unigrams and bigrams) and acceptable recipes. BLEU and ROUGE are not the most appropriate metrics for generation quality. A `correct' recipe can be written in many ways with the same main entities (ingredients). As BLEU-1/4 capture structural information via n-gram matching, they are not correlated with subjective recipe quality. This mirrors observations from BIBREF31, BIBREF8. We observe that personalized models make more diverse recipes than baseline. They thus perform better in BLEU-1 with more key entities (ingredient mentions) present, but worse in BLEU-4, as these recipes are written in a personalized way and deviate from gold on the phrasal level. Similarly, the `Prior Name' model generates more unigram-diverse recipes than other personalized models and obtains a correspondingly lower BLEU-1 score. Qualitative Analysis: We present sample outputs for a cocktail recipe in tab:samplerecipes, and additional recipes in the appendix. Generation quality progressively improves from generic baseline output to a blended cocktail produced by our best performing model. Models attending over prior recipes explicitly reference ingredients. The Prior Name model further suggests the addition of lemon and mint, which are reasonably associated with previously consumed recipes like coconut mousse and pork skewers. Personalization: To measure personalization, we evaluate how closely the generated text corresponds to a particular user profile. We compute the likelihood of generated recipes using identical input specifications but conditioned on ten different user profiles—one `gold' user who consumed the original recipe, and nine randomly generated user profiles. Following BIBREF8, we expect the highest likelihood for the recipe conditioned on the gold user. We measure user matching accuracy (UMA)—the proportion where the gold user is ranked highest—and Mean Reciprocal Rank (MRR) BIBREF32 of the gold user. All personalized models beat baselines in both metrics, showing our models personalize generated recipes to the given user profiles. The Prior Name model achieves the best UMA and MRR by a large margin, revealing that prior recipe names are strong signals for personalization. Moreover, the addition of attention mechanisms to capture these signals improves language modeling performance over a strong non-personalized baseline. Recipe Level Coherence: A plausible recipe should possess a coherent step order, and we evaluate this via a metric for recipe-level coherence. We use the neural scoring model from BIBREF33 to measure recipe-level coherence for each generated recipe. Each recipe step is encoded by BERT BIBREF34. 
Our scoring model is a GRU network that learns the overall recipe step ordering structure by minimizing the cosine similarity of recipe step hidden representations presented in the correct and reverse orders. Once pretrained, our scorer calculates the similarity of a generated recipe to the forward and backwards ordering of its corresponding gold label, giving a score equal to the difference between the former and latter. A higher score indicates better step ordering (with a maximum score of 2). tab:coherencemetrics shows that our personalized models achieve average recipe-level coherence scores of 1.78-1.82, surpassing the baseline at 1.77. Recipe Step Entailment: Local coherence is also crucial to a user following a recipe: it is crucial that subsequent steps are logically consistent with prior ones. We model local coherence as an entailment task: predicting the likelihood that a recipe step follows the preceding. We sample several consecutive (positive) and non-consecutive (negative) pairs of steps from each recipe. We train a BERT BIBREF34 model to predict the entailment score of a pair of steps separated by a [SEP] token, using the final representation of the [CLS] token. The step entailment score is computed as the average of scores for each set of consecutive steps in each recipe, averaged over every generated recipe for a model, as shown in tab:coherencemetrics. Human Evaluation: We presented 310 pairs of recipes for pairwise comparison BIBREF8 (details in appendix) between baseline and each personalized model, with results shown in tab:metricsontest. On average, human evaluators preferred personalized model outputs to baseline 63% of the time, confirming that personalized attention improves the semantic plausibility of generated recipes. We also performed a small-scale human coherence survey over 90 recipes, in which 60% of users found recipes generated by personalized models to be more coherent and preferable to those generated by baseline models. Conclusion In this paper, we propose a novel task: to generate personalized recipes from incomplete input specifications and user histories. On a large novel dataset of 180K recipes and 700K reviews, we show that our personalized generative models can generate plausible, personalized, and coherent recipes preferred by human evaluators for consumption. We also introduce a set of automatic coherence measures for instructional texts as well as personalization metrics to support our claims. Our future work includes generating structured representations of recipes to handle ingredient properties, as well as accounting for references to collections of ingredients (e.g. “dry mix"). Acknowledgements. This work is partly supported by NSF #1750063. We thank all reviewers for their constructive suggestions, as well as Rei M., Sujoy P., Alicia L., Eric H., Tim S., Kathy C., Allen C., and Micah I. for their feedback. Appendix ::: Food.com: Dataset Details Our raw data consists of 270K recipes and 1.4M user-recipe interactions (reviews) scraped from Food.com, covering a period of 18 years (January 2000 to December 2018). See tab:int-stats for dataset summary statistics, and tab:samplegk for sample information about one user-recipe interaction and the recipe involved. Appendix ::: Generated Examples See tab:samplechx for a sample recipe for chicken chili and tab:samplewaffle for a sample recipe for sweet waffles. 
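To make the personalization metrics described earlier concrete, here is a small self-contained sketch (ours, not the paper's evaluation code) of user matching accuracy (UMA) and mean reciprocal rank (MRR), given for each test recipe the likelihood under the gold user profile and under the randomly generated profiles:

```python
def uma_and_mrr(scores_per_example):
    """scores_per_example: list of (gold_loglik, [other_logliks...]) tuples."""
    uma_hits, rr_sum = 0, 0.0
    for gold, others in scores_per_example:
        # rank of the gold user among all candidate profiles (1 = highest likelihood)
        rank = 1 + sum(1 for s in others if s > gold)
        uma_hits += (rank == 1)
        rr_sum += 1.0 / rank
    n = len(scores_per_example)
    return uma_hits / n, rr_sum / n

# toy usage: two test recipes, each scored under the gold user and 3 random users
print(uma_and_mrr([(-1.2, [-2.0, -3.1, -1.5]),    # gold ranked 1st
                   (-2.5, [-2.0, -3.0, -4.0])]))  # gold ranked 2nd
# -> (0.5, 0.75)
```

The paper uses nine random profiles per gold user; the toy example above uses three only to keep the illustration short.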
Human Evaluation We prepared a set of 15 pairwise comparisons per evaluation session, and collected 930 pairwise evaluations (310 per personalized model) over 62 sessions. For each pair, users were given a partial recipe specification (name and 3-5 key ingredients), as well as two generated recipes labeled `A' and `B'. One recipe is generated from our baseline encoder-decoder model and one recipe is generated by one of our three personalized models (Prior Tech, Prior Name, Prior Recipe). The order of recipe presentation (A/B) is randomly selected for each question. A screenshot of the user evaluation interface is given in fig:exeval. We ask the user to indicate which recipe they find more coherent, and which recipe best accomplishes the goal indicated by the recipe name. A screenshot of this survey interface is given in fig:exeval2. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What are the baseline models? Only give me the answer and do not output any other words.
Answer:
[ "name-based Nearest-Neighbor model (NN), Encoder-Decoder baseline with ingredient attention (Enc-Dec)" ]
276
3,828
single_doc_qa
fd5b25d4e8064f618f1832d44d554b9e
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Sir Simon William English (born 30 December 1961) is a New Zealand former National Party politician who served as the 39th prime minister of New Zealand from 2016 to 2017. He had previously served as the 17th deputy prime minister of New Zealand and minister of finance from 2008 to 2016 under John Key and the Fifth National Government. A farmer and public servant before entering politics, English was elected to the New Zealand Parliament in as the National Party's candidate in the Wallace electorate. He was elevated to Cabinet in 1996 and in 1999 was made minister of finance, although he served for less than a year due to his party's loss at the 1999 general election. In October 2001, English replaced Jenny Shipley as the leader of the National Party (and consequently as Leader of the Opposition). He led the party to its worst defeat at the 2002 general election, and as a consequence, in October 2003 he was replaced as leader by Don Brash. In November 2006, after Brash's resignation, English became deputy leader under John Key. After National's victory at the 2008 general election, he became deputy prime minister and was also made minister of finance for the second time. Under English's direction New Zealand's economy maintained steady growth during National's three terms of government. He became a list-only MP after stepping down as an electorate MP at the 2014 general election. John Key resigned as leader of the National Party and prime minister in December 2016. English won the resulting leadership election unopposed and was sworn in as prime minister on 12 December 2016. His tenure was only ten months, and included a three-month election campaign. In the 2017 general election, National won the largest number of seats but fell short of a majority. The parties holding the balance of power declined to support the existing government, and English was subsequently replaced as prime minister by Jacinda Ardern, leader of the Labour Party. English initially continued on as Leader of the Opposition, but resigned as leader of the National Party on 27 February 2018 and left parliament two weeks later. Early life English was born on 30 December 1961 at Lumsden Maternity Centre in Lumsden. He is the eleventh of twelve children of Mervyn English and Norah (née O'Brien) English. His parents purchased Rosedale, a mixed sheep and cropping farm in Dipton, Southland from Mervyn's uncle, Vincent English, a bachelor, in 1944. English was born in the maternity unit at Lumsden. English attended St Thomas's School in Winton, then boarded at St. Patrick's College in Upper Hutt, where he became head boy. He played in the first XV of the school's rugby team. English went on to study commerce at the University of Otago, where he was a resident at Selwyn College, and then completed an honours degree in English literature at Victoria University of Wellington. After finishing his studies, English returned to Dipton and farmed for a few years. From 1987 to 1989, he worked in Wellington as a policy analyst for the New Zealand Treasury, at a time when the free market policies favoured by Labour's finance minister Roger Douglas (known collectively as "Rogernomics") were being implemented. English joined the National Party in 1980, while at Victoria University. 
He served for a period as chairman of the Southland branch of the Young Nationals, and became a member of the Wallace electorate committee. After moving to Wellington, he served for periods on the Island Bay and Miramar electorate committees, respectively. Fourth National Government (1990–1999) At the 1990 general election, English stood as the National candidate in Wallace, replacing the retiring Derek Angus, and was elected with a large majority. He would hold this seat, renamed Clutha-Southland in 1996, until 2014. He and three other newly elected National MPs (Tony Ryall, Nick Smith, and Roger Sowry) were soon identified as rising stars in New Zealand politics, and at various points were dubbed the "brat pack", the "gang of four", and the "young Turks". In his first term in parliament, English chaired a select committee into social services. He was made a parliamentary under-secretary in 1993, serving under the Minister of Health. First period in cabinet (1996–1999) In early 1996, English was elevated to cabinet by Prime Minister Jim Bolger, becoming the Minister for Crown Health Enterprises and Associate Minister of Education (to Wyatt Creech). He was 34 at the time, becoming the cabinet's youngest member. After the 1996 general election, the National Party was forced into a coalition with New Zealand First to retain government. In the resulting cabinet reshuffle, English emerged as Minister of Health. However, as a condition of the coalition agreement, NZ First's Neil Kirton (a first-term MP) was made Associate Minister of Health, effectively becoming English's deputy. This arrangement was described in the press as a "shotgun marriage", and there were frequent differences of opinion between the two ministers. After their relationship became unworkable, Kirton was sacked from the role in August 1997, with the agreement of NZ First leader Winston Peters. As Minister of Health, English was responsible for continuing the reforms to the public health system that National had begun after the 1990 general election. The reforms were unpopular, and health was perceived as one of the government's weaknesses, with the health portfolio consequently being viewed as a challenge. English believed that the unpopularity of the reforms was in part due to a failure in messaging, and encouraged his National colleagues to avoid bureaucratic and money-focused language (such as references to "balance sheets" and "user charges") and instead talk about the improvements to services the government's reforms would bring. He also rejected the idea that public hospitals could be run as commercial enterprises, a view which some of his colleagues had previously promoted. By early 1997, as dissatisfaction with Bolger's leadership began to grow, English was being touted as a potential successor, along with Jenny Shipley and Doug Graham. His age (35) was viewed as the main impediment to a successful leadership run. National's leadership troubles were resolved in December 1997, when Bolger resigned and Shipley was elected to the leadership unopposed. English had been a supporter of Bolger as leader, but Shipley reappointed him Minister of Health in her new cabinet. English was promoted to Minister of Finance in a reshuffle in January 1999, a position which was at the time subordinate to the Treasurer, Bill Birch. After a few months, the pair switched positions as part of Birch's transition to retirement, with English assuming the senior portfolio. 
In early interviews, he emphasised his wish to be seen as a pragmatist rather than an ideologue, and said that the initiatives of some of his predecessors (Roger Douglas's "Rogernomics" and Ruth Richardson's "Ruthanasia") had focused on "fruitless, theoretical debates" when "people just want to see problems solved". Opposition (1999–2008) After the National Party lost the 1999 election to Helen Clark's Labour Party, English continued on in the shadow cabinet as National's spokesperson for finance. He was elected deputy leader of the party in February 2001, following the resignation of Wyatt Creech, with Gerry Brownlee being his unsuccessful opponent. Leader of the Opposition In October 2001, after months of speculation, Jenny Shipley resigned as leader of the National Party after being told she no longer had the support of the party caucus. English was elected as her replacement unopposed (with Roger Sowry as his deputy), and consequently became Leader of the Opposition. However, he did not openly organise against Shipley, and according to The Southland Times "there was almost an element of 'aw, shucks, I'll do it then' about Mr English's ascension". Aged 39 when he was elected, English became the second-youngest leader in the National Party's history, after Jim McLay (who was 38 when elected in 1984). He also became only the third Southlander to lead a major New Zealand political party, after Joseph Ward and Adam Hamilton. However, English failed to improve the party's performance. In the 2002 election, National suffered its worst electoral defeat ever, gaining barely more than twenty percent of the vote. English described it as "the worst day of my political life". Both party insiders and the general public were split as to how much to blame him for the loss, but most of the party believed that English would be able to rebuild National's support. By late 2003, however, National's performance in opinion polls remained poor. The party had briefly increased its popularity in the year following the election, but by October its support had fallen to levels only slightly better than what it achieved in the last ballot. English also appeared in a boxing match for a charity against entertainer Ted Clarke. This did not boost his polling or that of the National party either, with suggestions that it devalued his image as a serious politician. Don Brash, former governor of the Reserve Bank and a relative newcomer to politics, began to build up support to replace English. On 28 October, Brash gained sufficient backing in Caucus to defeat English in a leadership contest. Shadow cabinet roles and deputy leader On 2 November 2003, when Brash changed responsibilities for certain MPs, English became National's spokesman for education, ranked at fifth place in the party's parliamentary hierarchy. He remained in parliament after the 2005 election. In his new shadow education portfolio, English performed strongly, and remained a party favourite despite his election defeat as leader in 2002, eventually being returned to the finance portfolio in August 2004 as deputy spokesman (while still retaining responsibility for education). In November 2006, Brash resigned as leader. English was considered as a potential replacement leader (running against John Key) or deputy leader (against incumbent Gerry Brownlee) in the ensuing leadership election. However, a contest was avoided when the MPs agreed a Key/English ticket would run unopposed in a display of party unity. 
English took over the deputy leadership and the finance portfolio in the Key shadow cabinet. Fifth National Government (2008–2017) Deputy Prime Minister and Minister of Finance (2008–2016) At the 2008 election, English was re-elected by his electorate, winning by a margin of about 15,500 votes. He became Deputy Prime Minister of New Zealand and Minister of Finance in the fifth National Government, being sworn into office on 19 November 2008 and continued to serve in those roles until becoming Prime Minister on 12 December 2014. He was also made Minister of Infrastructure in National's first term of government and Minister responsible for Housing New Zealand Corporation and minister responsible for the New Zealand flag consideration process in its third. He was comfortably re-elected in Clutha-Southland in the 2011 election but opted to run as a party-list candidate in 2014. The pairing of John Key as leader of the National Party and English as his deputy has been compared to that of Bob Hawke and Paul Keating (in Australia) and Tony Blair and Gordon Brown (in the UK). English acceded to the role of Finance Minister in the continuing wake of the financial crisis. In response to New Zealand's rising debt, English made budget deficit-reduction his main priority. His first budget outlined three focuses in New Zealand's financial recovery: "improving the business environment and removing roadblocks to growth; investment in productive infrastructure; and improving the way government works". One of his first acts was creating the National Infrastructure Unit, charged with formulating a plan for infrastructure projects and investments. He commissioned a government-wide spending review, with an aim to reducing government expenditure—with the exceptions of a two-year stimulus package and long-term increases on infrastructure spending. In April 2011, the Opposition criticised English for suggesting that New Zealand businesses could use New Zealand's low wages to help it compete with Australia. The National Government campaigned for re-election in 2011 on its economic record. The Government boasted growth for five consecutive quarters up to mid-2010, totalling 1.6% of real GDP. Strong growth resulted in a surplus of $473 million for the 2015/16 financial year, projected to rise to $8.5 billion by 2020/21. In his 2016 Economic and Fiscal Update address, English stated that reducing debt and tackling the costs of the 2016 Kaikōura earthquake were higher priorities than reducing rates of tax. Allowances issue In 2009, the media, including TVNZ and TV3 revealed that English was receiving about NZ$900 a week as part of a living allowance for ministers, to live in his own NZ$1.2 million Wellington home. At the time, English also received $276,200 in his annual salary as Deputy Prime Minister. It was also revealed other ministers with homes in the capital city were also claiming accommodation allowances. On 3 August 2009, Prime Minister John Key started a review of the housing allowances claimed by cabinet ministers. English subsequently paid back $12,000 and only claimed about $24,000 a year in living allowances. The Auditor-General's office said in September 2009 that they were making "preliminary enquiries" into parliamentary housing expenses in response to a letter of complaint from Progressive party leader Jim Anderton. Two days later English stated that he would no longer take up any housing allowance and had paid back all the allowance he had received since the November 2008 election. 
Prime Minister (2016–2017) John Key resigned on 12 December, and endorsed English as his successor in the resulting leadership election. Following the drop-out of both Judith Collins and Jonathan Coleman from the leadership election, English was sworn in as the 39th Prime Minister of New Zealand on 12 December 2016. English appointed his first cabinet on 18 December. In a reshuffle, he appointed Steven Joyce to succeed him as Finance Minister, while most ministerial portfolios remained the same. In February 2017, English did not attend Waitangi Day commemorations at the historic treaty grounds, reportedly in response to the Ngāpuhi iwi's decision to stop the Prime Minister from speaking at the marae. Ngāpuhi have protested the Government's negotiation of the Trans Pacific Partnership Agreement (TPPA), which the iwi believe infringes upon Māori sovereignty, and thus does not adhere to the Treaty of Waitangi. English had been invited to attend in an official capacity; his non-attendance was criticised by a Ngāpuhi elder and Opposition leader Andrew Little. In his first overseas trip as Prime Minister, English travelled to Europe to discuss trade ties, including a prospective New Zealand–European Union free trade agreement. He first travelled to London on 13 January 2017 to meet British Prime Minister Theresa May. Discussing trade relations, English said the two nations were "natural partners" and would "continue to forge ties" after the UK's withdrawal from the EU. He also arranged to meet with London Mayor Sadiq Khan, Belgian Prime Minister Charles Michel and German Chancellor Angela Merkel. In a meeting with Merkel, English received crucial backing from Germany for a trade deal with the EU. On 16 January, English stated that his government would continue to promote TPPA, despite the United States' decision to withdraw from the agreement. He explained that Southeast Asian countries would now be treated as a priority in negotiations—he also asserted that the United States was ceding influence to China by its rejection of the trade pact. At a press conference at the Beehive on 1 February 2017, English announced that the 2017 general election would be held on 23 September. The Prime Minister later confirmed that his party would approach ACT, United Future and the Māori Party if confidence and supply agreements were required to form a government following the election. In his second cabinet reshuffle on 24 April, English appointed Gerry Brownlee as his new Foreign Affairs Minister; he also promoted Nikki Kaye to the portfolio of Education Minister, and moved Mark Mitchell into the cabinet to become Defence Minister. The reshuffle was perceived as an election preparation. On 13 February 2017, English welcomed Australian Prime Minister Malcolm Turnbull to Wellington. The two leaders reaffirmed their shared trade agenda, and discussed changes to the Australian citizenship pathway which will affect permanent residents originating from New Zealand. On 19 June, it was reported that Todd Barclay, who succeeded English as MP for Clutha-Southland, had clandestinely recorded one of his employee's conversations the previous year, and that John Key's leaders' budget was used to pay a confidential settlement after the employee resigned. English admitted that he had been aware of the illegal recording and the settlement, and thus implicated in the scandal. 
During the 2017 National campaign launch, English introduced a $379 million social investment package including digital learning academies for high school students, more resources for mathematics, and boosting support for teaching second languages in schools, and maintaining National Standards in the school curriculum. Prime Minister English also sought to defend National's financial management and economic track record and claimed that the opposition Labour Party would raise taxes. Early opinion polling had forecast a poor showing in the election for the Labour Party, but in early August 37-year-old Jacinda Ardern took over as Labour leader and seemingly energised younger voters. At the 2017 general election, National won the largest share of the party vote (44.4%) and the largest number of seats (56) in the House Representatives. However, National lacked enough seats to govern alone due to two of the party's support partners, the Māori Party and United Future, losing their parliamentary seats. In response, English stated that the party would be entering into talks to form a coalition with New Zealand First. Following talks with the two largest parties, New Zealand First entered a coalition arrangement with the Labour Party. English was succeeded as prime minister by Jacinda Ardern on 26 October. Opposition (2017–2018) Leader of the Opposition English was re-elected as National Party leader on 24 October 2017. At the time of his re-election, English announced his intention to stay on as leader until the next general election. On 13 February 2018, however, he stood down as National Party leader due to personal reasons, and instructed the party to put into motion the processes to elect a new leader. He also retired from Parliament. English's resignation followed weeks of speculation that he would step aside for a new leader. On 27 February, he was succeeded as party leader by Simon Bridges as the result of the leadership election held that day. Post-premiership In 2018, English joined the board of Australian conglomerate, Wesfarmers. English serves in Chairmanships of Mount Cook Alpine Salmon, Impact Lab Ltd and Manawanui Support Ltd. He is also a director of The Instillery, Centre for Independent Studies and The Todd Corporation Limited, and is a member of the Impact Advisory Group of Macquarie Infrastructure and Real Assets. Political and social views English is regarded as more socially conservative than his predecessor, John Key. He has stated his opposition to voluntary euthanasia and physician-assisted suicide, same-sex civil unions, and the decriminalisation of prostitution. As Prime Minister he opposed any "liberalisation" of abortion law. In 2004, English voted against a bill to establish civil unions for both same-sex and opposite-sex couples. In 2005, he voted for the Marriage (Gender Clarification) Amendment Bill, which would have amended the Marriage Act to define marriage as only between a man and a woman. English voted against the Marriage (Definition of Marriage) Amendment Bill, a bill that legalised same-sex marriage in New Zealand. However, in December 2016 he stated, "I'd probably vote differently now on the gay marriage issue. I don't think that gay marriage is a threat to anyone else's marriage". In 2009, English voted against the Misuse of Drugs (Medicinal Cannabis) Amendment Bill, a bill aimed at amending the Misuse of Drugs Act so that cannabis could be used for medical purposes. Personal life English met his future wife, Mary Scanlon, at university. 
She was studying medicine at the time, and became a general practitioner. Both her parents were immigrants, her father being Samoan and her mother Italian, born on the island of Stromboli. They have six children: a daughter and five sons. English is a practising Roman Catholic, but has stated that he considers his religious beliefs personal and thus separate from politics. In June 2002, English took part in TV3's Fight For Life, a celebrity boxing fundraiser to raise money for the Yellow Ribbon anti-youth-suicide campaign, influenced by the death of a teenage nephew in 1997. He lost a split decision to former university colleague Ted Clarke. Honours In the 2018 Queen's Birthday Honours, English was appointed a Knight Companion of the New Zealand Order of Merit, for services of over 27 years to the State. See also List of New Zealand governments Politics of New Zealand External links Profile at National Party Profile on Parliament.nz Releases and speeches at Beehive.govt.nz Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: In which electorate was Simon English elected to the New Zealand Parliament? Only give me the answer and do not output any other words.
Answer:
[ "The Wallace electorate." ]
276
4,671
single_doc_qa
16c70790985b405494a16cb7a1a5d888
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Neural networks have been successfully used to describe images with text using sequence-to-sequence models BIBREF0. However, the results are simple and dry captions which are one or two phrases long. Humans looking at a painting see more than just objects. Paintings stimulate sentiments, metaphors and stories as well. Therefore, our goal is to have a neural network describe the painting artistically in a style of choice. As a proof of concept, we present a model which generates Shakespearean prose for a given painting as shown in Figure FIGREF1. Accomplishing this task is difficult with traditional sequence to sequence models since there does not exist a large collection of Shakespearean prose which describes paintings: Shakespeare's works describes a single painting shown in Figure FIGREF3. Fortunately we have a dataset of modern English poems which describe images BIBREF1 and line-by-line modern paraphrases of Shakespeare's plays BIBREF2. Our solution is therefore to combine two separately trained models to synthesize Shakespearean prose for a given painting. Introduction ::: Related work A general end-to-end approach to sequence learning BIBREF3 makes minimal assumptions on the sequence structure. This model is widely used in tasks such as machine translation, text summarization, conversational modeling, and image captioning. A generative model using a deep recurrent architecture BIBREF0 has also beeen used for generating phrases describing an image. The task of synthesizing multiple lines of poetry for a given image BIBREF1 is accomplished by extracting poetic clues from images. Given the context image, the network associates image attributes with poetic descriptions using a convolutional neural net. The poem is generated using a recurrent neural net which is trained using multi-adversarial training via policy gradient. Transforming text from modern English to Shakespearean English using text "style transfer" is challenging. An end to end approach using a sequence-to-sequence model over a parallel text corpus BIBREF2 has been proposed based on machine translation. In the absence of a parallel text corpus, generative adversarial networks (GANs) have been used, which simultaneously train two models: a generative model which captures the data distribution, and a discriminative model which evaluates the performance of the generator. Using a target domain language model as a discriminator has also been employed BIBREF4, providing richer and more stable token-level feedback during the learning process. A key challenge in both image and text style transfer is separating content from style BIBREF5, BIBREF6, BIBREF7. Cross-aligned auto-encoder models have focused on style transfer using non-parallel text BIBREF7. Recently, a fine grained model for text style transfer has been proposed BIBREF8 which controls several factors of variation in textual data by using back-translation. This allows control over multiple attributes, such as gender and sentiment, and fine-grained control over the trade-off between content and style. Methods We use a total three datasets: two datasets for generating an English poem from an image, and Shakespeare plays and their English translations for text style transfer. We train a model for generating poems from images based on two datasets BIBREF1. 
The first dataset consists of image and poem pairs, namely a multi-modal poem dataset (MultiM-Poem), and the second dataset is a large poem corpus, namely a uni-modal poem dataset (UniM-Poem). The image and poem pairs are extended by adding the nearest three neighbor poems from the poem corpus without redundancy, and an extended image and poem pair dataset is constructed and denoted as MultiM-Poem(Ex)BIBREF1. We use a collection of line-by-line modern paraphrases for 16 of Shakespeare’s plays BIBREF2, for training a style transfer network from English poems to Shakespearean prose. We use 18,395 sentences from the training data split. We keep 1,218 sentences in the validation data set and 1,462 sentences in our test set. Methods ::: Image To Poem Actor-Critic Model For generating a poem from images we use an existing actor-critic architecture BIBREF1. This involves 3 parallel CNNs: an object CNN, sentiment CNN, and scene CNN, for feature extraction. These features are combined with a skip-thought model which provides poetic clues, which are then fed into a sequence-to-sequence model trained by policy gradient with 2 discriminator networks for rewards. This as a whole forms a pipeline that takes in an image and outputs a poem as shown on the top left of Figure FIGREF4. A CNN-RNN generative model acts as an agent. The parameters of this agent define a policy whose execution determines which word is selected as an action. When the agent selects all words in a poem, it receives a reward. Two discriminative networks, shown on the top right of Figure FIGREF4, are defined to serve as rewards concerning whether the generated poem properly describes the input image and whether the generated poem is poetic. The goal of the poem generation model is to generate a sequence of words as a poem for an image to maximize the expected return. Methods ::: Shakespearizing Poetic Captions For Shakespearizing modern English texts, we experimented with various types of sequence to sequence models. Since the size of the parallel translation data available is small, we leverage a dictionary providing a mapping between Shakespearean words and modern English words to retrofit pre-trained word embeddings. Incorporating this extra information improves the translation task. The large number of shared word types between the source and target sentences indicates that sharing the representation between them is beneficial. Methods ::: Shakespearizing Poetic Captions ::: Seq2Seq with Attention We use a sequence-to-sequence model which consists of a single layer unidrectional LSTM encoder and a single layer LSTM decoder and pre-trained retrofitted word embeddings shared between source and target sentences. We experimented with two different types of attention: global attention BIBREF9, in which the model makes use of the output from the encoder and decoder for the current time step only, and Bahdanau attention BIBREF10, where computing attention requires the output of the decoder from the prior time step. We found that global attention performs better in practice for our task of text style transfer. Methods ::: Shakespearizing Poetic Captions ::: Seq2Seq with a Pointer Network Since a pair of corresponding Shakespeare and modern English sentences have significant vocabulary overlap we extend the sequence-to-sequence model mentioned above using pointer networks BIBREF11 that provide location based attention and have been used to enable copying of tokens directly from the input. 
Moreover, there are lot of proper nouns and rare words which might not be predicted by a vanilla sequence to sequence model. Methods ::: Shakespearizing Poetic Captions ::: Prediction For both seq2seq models, we use the attention matrices returned at each decoder time step during inference, to compute the next word in the translated sequence if the decoder output at the current time step is the UNK token. We replace the UNKs in the target output with the highest aligned, maximum attention, source word. The seq2seq model with global attention gives the best results with an average target BLEU score of 29.65 on the style transfer dataset, compared with an average target BLEU score of 26.97 using the seq2seq model with pointer networks. Results We perform a qualitative analysis of the Shakespearean prose generated for the input paintings. We conducted a survey, in which we presented famous paintings including those shown in Figures FIGREF1 and FIGREF10 and the corresponding Shakespearean prose generated by the model, and asked 32 students to rate them on the basis of content, creativity and similarity to Shakespearean style on a Likert scale of 1-5. Figure FIGREF12 shows the result of our human evaluation. The average content score across the paintings is 3.7 which demonstrates that the prose generated is relevant to the painting. The average creativity score is 3.9 which demonstrates that the model captures more than basic objects in the painting successfully using poetic clues in the scene. The average style score is 3.9 which demonstrates that the prose generated is perceived to be in the style of Shakespeare. We also perform a quantitative analysis of style transfer by generating BLEU scores for the model output using the style transfer dataset. The variation of the BLEU scores with the source sentence lengths is shown in Figure FIGREF11. As expected, the BLEU scores decrease with increase in source sentence lengths. Results ::: Implementation All models were trained on Google Colab with a single GPU using Python 3.6 and Tensorflow 2.0. The number of hidden units for the encoder and decoder is 1,576 and 256 for seq2seq with global attention and seq2seq with pointer networks respectively. Adam optimizer was used with the default learning rate of 0.001. The model was trained for 25 epochs. We use pre-trained retrofitted word embeddings of dimension 192. Results ::: Limitations Since we do not have an end-to-end dataset, the generated English poem may not work well with Shakespeare style transfer as shown in Figure FIGREF12 for "Starry Night" with a low average content score. This happens when the style transfer dataset does not have similar words in the training set of sentences. A solution would be to expand the style transfer dataset, for a better representation of the poem data. Results ::: Conclusions and Future Work In conclusion, combining two pipelines with an intermediate representation works well in practice. We observe that a CNN-RNN based image-to-poem net combined with a seq2seq model with parallel text corpus for text style transfer synthesizes Shakespeare-style prose for a given painting. For the seq2seq model used, we observe that it performs better in practice using global attention as compared with local attention. We make our models and code publicly available BIBREF12. In future work we would like to experiment with GANs in the absence of non-parallel datasets, so that we can use varied styles for text style transfer. 
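As a minimal sketch of the UNK-replacement step described in the Prediction paragraph above — our own illustration in plain Python/NumPy; the variable names, the attention-matrix layout, and the toy example are assumptions, not taken from the authors' code:

```python
import numpy as np

def replace_unks(source_tokens, output_tokens, attention, unk="<unk>"):
    """For each decoder step that produced UNK, copy the source token with the
    highest attention weight at that step.
    attention: array of shape (num_output_steps, num_source_tokens)."""
    fixed = []
    for t, tok in enumerate(output_tokens):
        if tok == unk:
            fixed.append(source_tokens[int(np.argmax(attention[t]))])
        else:
            fixed.append(tok)
    return fixed

# toy usage
src = ["thou", "art", "Hamlet"]
out = ["you", "are", "<unk>"]
attn = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.8, 0.1],
                 [0.1, 0.1, 0.8]])
print(replace_unks(src, out, attn))   # ['you', 'are', 'Hamlet']
```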
We would also like to experiment with cross-aligned auto-encoders, which form a latent content representation, to efficiently separate style and content. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What models are used for painting embedding and what for language style transfer? Only give me the answer and do not output any other words.
Answer:
[ "generating a poem from images we use an existing actor-critic architecture, various types of sequence to sequence models" ]
276
2,091
single_doc_qa
951da80ff2aa47de8297b03fd5ccd525
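The prediction step described in Passage 1 above combines global (dot-product) attention with a heuristic that replaces UNK outputs by the most-attended source word. Below is a minimal NumPy sketch of both pieces; the vector sizes, the toy sentences, and the function names are illustrative assumptions, not the authors' released code.

import numpy as np

def global_attention(decoder_state, encoder_states):
    # Luong-style global attention: dot-product scores against every encoder
    # state, softmax over source positions, then a weighted context vector.
    scores = encoder_states @ decoder_state          # shape: (src_len,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    context = weights @ encoder_states               # shape: (hidden_dim,)
    return context, weights

def replace_unks(target_tokens, attention_rows, source_tokens, unk="<unk>"):
    # Whenever the decoder emits UNK, copy the source word that received the
    # highest attention at that decoding step (the trick from the passage).
    repaired = []
    for step, token in enumerate(target_tokens):
        if token == unk:
            token = source_tokens[int(np.argmax(attention_rows[step]))]
        repaired.append(token)
    return repaired

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = ["thou", "art", "more", "lovely", "than", "summer"]
    encoder_states = rng.normal(size=(len(source), 8))   # toy encoder outputs
    decoder_state = rng.normal(size=8)                   # toy decoder state
    context, weights = global_attention(decoder_state, encoder_states)
    print("attention over source:", np.round(weights, 3))
    attn_rows = np.tile(weights, (3, 1))                 # pretend 3 decoding steps
    print(replace_unks(["you", "<unk>", "lovelier"], attn_rows, source))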
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction This work is licenced under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/ Deep neural networks have been widely used in text classification and have achieved promising results BIBREF0 , BIBREF1 , BIBREF2 . Most focus on content information and use models such as convolutional neural networks (CNN) BIBREF3 or recursive neural networks BIBREF4 . However, for user-generated posts on social media like Facebook or Twitter, there is more information that should not be ignored. On social media platforms, a user can act either as the author of a post or as a reader who expresses his or her comments about the post. In this paper, we classify posts taking into account post authorship, likes, topics, and comments. In particular, users and their “likes” hold strong potential for text mining. For example, given a set of posts that are related to a specific topic, a user's likes and dislikes provide clues for stance labeling. From a user point of view, users with positive attitudes toward the issue leave positive comments on the posts with praise or even just the post's content; from a post point of view, positive posts attract users who hold positive stances. We also investigate the influence of topics: different topics are associated with different stance labeling tendencies and word usage. For example we discuss women's rights and unwanted babies on the topic of abortion, but we criticize medicine usage or crime when on the topic of marijuana BIBREF5 . Even for posts on a specific topic like nuclear power, a variety of arguments are raised: green energy, radiation, air pollution, and so on. As for comments, we treat them as additional text information. The arguments in the comments and the commenters (the users who leave the comments) provide hints on the post's content and further facilitate stance classification. In this paper, we propose the user-topic-comment neural network (UTCNN), a deep learning model that utilizes user, topic, and comment information. We attempt to learn user and topic representations which encode user interactions and topic influences to further enhance text classification, and we also incorporate comment information. We evaluate this model on a post stance classification task on forum-style social media platforms. The contributions of this paper are as follows: 1. We propose UTCNN, a neural network for text in modern social media channels as well as legacy social media, forums, and message boards — anywhere that reveals users, their tastes, as well as their replies to posts. 2. When classifying social media post stances, we leverage users, including authors and likers. User embeddings can be generated even for users who have never posted anything. 3. We incorporate a topic model to automatically assign topics to each post in a single topic dataset. 4. We show that overall, the proposed method achieves the highest performance in all instances, and that all of the information extracted, whether users, topics, or comments, still has its contributions. Extra-Linguistic Features for Stance Classification In this paper we aim to use text as well as other features to see how they complement each other in a deep learning model. 
In the stance classification domain, previous work has showed that text features are limited, suggesting that adding extra-linguistic constraints could improve performance BIBREF6 , BIBREF7 , BIBREF8 . For example, Hasan and Ng as well as Thomas et al. require that posts written by the same author have the same stance BIBREF9 , BIBREF10 . The addition of this constraint yields accuracy improvements of 1–7% for some models and datasets. Hasan and Ng later added user-interaction constraints and ideology constraints BIBREF7 : the former models the relationship among posts in a sequence of replies and the latter models inter-topic relationships, e.g., users who oppose abortion could be conservative and thus are likely to oppose gay rights. For work focusing on online forum text, since posts are linked through user replies, sequential labeling methods have been used to model relationships between posts. For example, Hasan and Ng use hidden Markov models (HMMs) to model dependent relationships to the preceding post BIBREF9 ; Burfoot et al. use iterative classification to repeatedly generate new estimates based on the current state of knowledge BIBREF11 ; Sridhar et al. use probabilistic soft logic (PSL) to model reply links via collaborative filtering BIBREF12 . In the Facebook dataset we study, we use comments instead of reply links. However, as the ultimate goal in this paper is predicting not comment stance but post stance, we treat comments as extra information for use in predicting post stance. Deep Learning on Extra-Linguistic Features In recent years neural network models have been applied to document sentiment classification BIBREF13 , BIBREF4 , BIBREF14 , BIBREF15 , BIBREF2 . Text features can be used in deep networks to capture text semantics or sentiment. For example, Dong et al. use an adaptive layer in a recursive neural network for target-dependent Twitter sentiment analysis, where targets are topics such as windows 7 or taylor swift BIBREF16 , BIBREF17 ; recursive neural tensor networks (RNTNs) utilize sentence parse trees to capture sentence-level sentiment for movie reviews BIBREF4 ; Le and Mikolov predict sentiment by using paragraph vectors to model each paragraph as a continuous representation BIBREF18 . They show that performance can thus be improved by more delicate text models. Others have suggested using extra-linguistic features to improve the deep learning model. The user-word composition vector model (UWCVM) BIBREF19 is inspired by the possibility that the strength of sentiment words is user-specific; to capture this they add user embeddings in their model. In UPNN, a later extension, they further add a product-word composition as product embeddings, arguing that products can also show different tendencies of being rated or reviewed BIBREF20 . Their addition of user information yielded 2–10% improvements in accuracy as compared to the above-mentioned RNTN and paragraph vector methods. We also seek to inject user information into the neural network model. In comparison to the research of Tang et al. on sentiment classification for product reviews, the difference is two-fold. First, we take into account multiple users (one author and potentially many likers) for one post, whereas only one user (the reviewer) is involved in a review. Second, we add comment information to provide more features for post stance classification. None of these two factors have been considered previously in a deep learning model for text stance classification. 
Therefore, we propose UTCNN, which generates and utilizes user embeddings for all users — even for those who have not authored any posts — and incorporates comments to further improve performance. Method In this section, we first describe CNN-based document composition, which captures user- and topic-dependent document-level semantic representation from word representations. Then we show how to add comment information to construct the user-topic-comment neural network (UTCNN). User- and Topic-dependent Document Composition As shown in Figure FIGREF4 , we use a general CNN BIBREF3 and two semantic transformations for document composition . We are given a document with an engaged user INLINEFORM0 , a topic INLINEFORM1 , and its composite INLINEFORM2 words, each word INLINEFORM3 of which is associated with a word embedding INLINEFORM4 where INLINEFORM5 is the vector dimension. For each word embedding INLINEFORM6 , we apply two dot operations as shown in Equation EQREF6 : DISPLAYFORM0 where INLINEFORM0 models the user reading preference for certain semantics, and INLINEFORM1 models the topic semantics; INLINEFORM2 and INLINEFORM3 are the dimensions of transformed user and topic embeddings respectively. We use INLINEFORM4 to model semantically what each user prefers to read and/or write, and use INLINEFORM5 to model the semantics of each topic. The dot operation of INLINEFORM6 and INLINEFORM7 transforms the global representation INLINEFORM8 to a user-dependent representation. Likewise, the dot operation of INLINEFORM9 and INLINEFORM10 transforms INLINEFORM11 to a topic-dependent representation. After the two dot operations on INLINEFORM0 , we have user-dependent and topic-dependent word vectors INLINEFORM1 and INLINEFORM2 , which are concatenated to form a user- and topic-dependent word vector INLINEFORM3 . Then the transformed word embeddings INLINEFORM4 are used as the CNN input. Here we apply three convolutional layers on the concatenated transformed word embeddings INLINEFORM5 : DISPLAYFORM0 where INLINEFORM0 is the index of words; INLINEFORM1 is a non-linear activation function (we use INLINEFORM2 ); INLINEFORM5 is the convolutional filter with input length INLINEFORM6 and output length INLINEFORM7 , where INLINEFORM8 is the window size of the convolutional operation; and INLINEFORM9 and INLINEFORM10 are the output and bias of the convolution layer INLINEFORM11 , respectively. In our experiments, the three window sizes INLINEFORM12 in the three convolution layers are one, two, and three, encoding unigram, bigram, and trigram semantics accordingly. After the convolutional layer, we add a maximum pooling layer among convolutional outputs to obtain the unigram, bigram, and trigram n-gram representations. This is succeeded by an average pooling layer for an element-wise average of the three maximized convolution outputs. UTCNN Model Description Figure FIGREF10 illustrates the UTCNN model. As more than one user may interact with a given post, we first add a maximum pooling layer after the user matrix embedding layer and user vector embedding layer to form a moderator matrix embedding INLINEFORM0 and a moderator vector embedding INLINEFORM1 for moderator INLINEFORM2 respectively, where INLINEFORM3 is used for the semantic transformation in the document composition process, as mentioned in the previous section. The term moderator here is to denote the pseudo user who provides the overall semantic/sentiment of all the engaged users for one document. 
The embedding INLINEFORM4 models the moderator stance preference, that is, the pattern of the revealed user stance: whether a user is willing to show his preference, whether a user likes to show impartiality with neutral statements and reasonable arguments, or just wants to show strong support for one stance. Ideally, the latent user stance is modeled by INLINEFORM5 for each user. Likewise, for topic information, a maximum pooling layer is added after the topic matrix embedding layer and topic vector embedding layer to form a joint topic matrix embedding INLINEFORM6 and a joint topic vector embedding INLINEFORM7 for topic INLINEFORM8 respectively, where INLINEFORM9 models the semantic transformation of topic INLINEFORM10 as in users and INLINEFORM11 models the topic stance tendency. The latent topic stance is also modeled by INLINEFORM12 for each topic. As for comments, we view them as short documents with authors only but without likers nor their own comments. Therefore we apply document composition on comments although here users are commenters (users who comment). It is noticed that the word embeddings INLINEFORM0 for the same word in the posts and comments are the same, but after being transformed to INLINEFORM1 in the document composition process shown in Figure FIGREF4 , they might become different because of their different engaged users. The output comment representation together with the commenter vector embedding INLINEFORM2 and topic vector embedding INLINEFORM3 are concatenated and a maximum pooling layer is added to select the most important feature for comments. Instead of requiring that the comment stance agree with the post, UTCNN simply extracts the most important features of the comment contents; they could be helpful, whether they show obvious agreement or disagreement. Therefore when combining comment information here, the maximum pooling layer is more appropriate than other pooling or merging layers. Indeed, we believe this is one reason for UTCNN's performance gains. Finally, the pooled comment representation, together with user vector embedding INLINEFORM0 , topic vector embedding INLINEFORM1 , and document representation are fed to a fully connected network, and softmax is applied to yield the final stance label prediction for the post. Experiment We start with the experimental dataset and then describe the training process as well as the implementation of the baselines. We also implement several variations to reveal the effects of features: authors, likers, comment, and commenters. In the results section we compare our model with related work. Dataset We tested the proposed UTCNN on two different datasets: FBFans and CreateDebate. FBFans is a privately-owned, single-topic, Chinese, unbalanced, social media dataset, and CreateDebate is a public, multiple-topic, English, balanced, forum dataset. Results using these two datasets show the applicability and superiority for different topics, languages, data distributions, and platforms. The FBFans dataset contains data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs. There are a total of 2,496 authors, 505,137 likers, 33,686 commenters, and 505,412 unique users. Two annotators were asked to take into account only the post content to label the stance of the posts in the whole dataset as supportive, neutral, or unsupportive (hereafter denoted as Sup, Neu, and Uns). 
Sup/Uns posts were those in support of or against anti-reconstruction; Neu posts were those evincing a neutral standpoint on the topic, or were irrelevant. Raw agreement between annotators is 0.91, indicating high agreement. Specifically, Cohen’s Kappa for Neu and not Neu labeling is 0.58 (moderate), and for Sup or Uns labeling is 0.84 (almost perfect). Posts with inconsistent labels were filtered out, and the development and testing sets were randomly selected from what was left. Posts in the development and testing sets involved at least one user who appeared in the training set. The number of posts for each stance is shown on the left-hand side of Table TABREF12 . About twenty percent of the posts were labeled with a stance, and the number of supportive (Sup) posts was much larger than that of the unsupportive (Uns) ones: this is thus highly skewed data, which complicates stance classification. On average, 161.1 users were involved in one post. The maximum was 23,297 and the minimum was one (the author). For comments, on average there were 3 comments per post. The maximum was 1,092 and the minimum was zero. To test whether the assumption of this paper – posts attract users who hold the same stance to like them – is reliable, we examine the likes from authors of different stances. Posts in FBFans dataset are used for this analysis. We calculate the like statistics of each distinct author from these 32,595 posts. As the numbers of authors in the Sup, Neu and Uns stances are largely imbalanced, these numbers are normalized by the number of users of each stance. Table TABREF13 shows the results. Posts with stances (i.e., not neutral) attract users of the same stance. Neutral posts also attract both supportive and neutral users, like what we observe in supportive posts, but just the neutral posts can attract even more neutral likers. These results do suggest that users prefer posts of the same stance, or at least posts of no obvious stance which might cause annoyance when reading, and hence support the user modeling in our approach. The CreateDebate dataset was collected from an English online debate forum discussing four topics: abortion (ABO), gay rights (GAY), Obama (OBA), and marijuana (MAR). The posts are annotated as for (F) and against (A). Replies to posts in this dataset are also labeled with stance and hence use the same data format as posts. The labeling results are shown in the right-hand side of Table TABREF12 . We observe that the dataset is more balanced than the FBFans dataset. In addition, there are 977 unique users in the dataset. To compare with Hasan and Ng's work, we conducted five-fold cross-validation and present the annotation results as the average number of all folds BIBREF9 , BIBREF5 . The FBFans dataset has more integrated functions than the CreateDebate dataset; thus our model can utilize all linguistic and extra-linguistic features. For the CreateDebate dataset, on the other hand, the like and comment features are not available (as there is a stance label for each reply, replies are evaluated as posts as other previous work) but we still implemented our model using the content, author, and topic information. Settings In the UTCNN training process, cross-entropy was used as the loss function and AdaGrad as the optimizer. 
For FBFans dataset, we learned the 50-dimensional word embeddings on the whole dataset using GloVe BIBREF21 to capture the word semantics; for CreateDebate dataset we used the publicly available English 50-dimensional word embeddings, pre-trained also using GloVe. These word embeddings were fixed in the training process. The learning rate was set to 0.03. All user and topic embeddings were randomly initialized in the range of [-0.1 0.1]. Matrix embeddings for users and topics were sized at 250 ( INLINEFORM0 ); vector embeddings for users and topics were set to length 10. We applied the LDA topic model BIBREF22 on the FBFans dataset to determine the latent topics with which to build topic embeddings, as there is only one general known topic: nuclear power plants. We learned 100 latent topics and assigned the top three topics for each post. For the CreateDebate dataset, which itself constitutes four topics, the topic labels for posts were used directly without additionally applying LDA. For the FBFans data we report class-based f-scores as well as the macro-average f-score ( INLINEFORM0 ) shown in equation EQREF19 . DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 are the average precision and recall of the three class. We adopted the macro-average f-score as the evaluation metric for the overall performance because (1) the experimental dataset is severely imbalanced, which is common for contentious issues; and (2) for stance classification, content in minor-class posts is usually more important for further applications. For the CreateDebate dataset, accuracy was adopted as the evaluation metric to compare the results with related work BIBREF7 , BIBREF9 , BIBREF12 . Baselines We pit our model against the following baselines: 1) SVM with unigram, bigram, and trigram features, which is a standard yet rather strong classifier for text features; 2) SVM with average word embedding, where a document is represented as a continuous representation by averaging the embeddings of the composite words; 3) SVM with average transformed word embeddings (the INLINEFORM0 in equation EQREF6 ), where a document is represented as a continuous representation by averaging the transformed embeddings of the composite words; 4) two mature deep learning models on text classification, CNN BIBREF3 and Recurrent Convolutional Neural Networks (RCNN) BIBREF0 , where the hyperparameters are based on their work; 5) the above SVM and deep learning models with comment information; 6) UTCNN without user information, representing a pure-text CNN model where we use the same user matrix and user embeddings INLINEFORM1 and INLINEFORM2 for each user; 7) UTCNN without the LDA model, representing how UTCNN works with a single-topic dataset; 8) UTCNN without comments, in which the model predicts the stance label given only user and topic information. All these models were trained on the training set, and parameters as well as the SVM kernel selections (linear or RBF) were fine-tuned on the development set. Also, we adopt oversampling on SVMs, CNN and RCNN because the FBFans dataset is highly imbalanced. Results on FBFans Dataset In Table TABREF22 we show the results of UTCNN and the baselines on the FBFans dataset. Here Majority yields good performance on Neu since FBFans is highly biased to the neutral class. The SVM models perform well on Sup and Neu but perform poorly for Uns, showing that content information in itself is insufficient to predict stance labels, especially for the minor class. 
With the transformed word embedding feature, SVM can achieve comparable performance as SVM with n-gram feature. However, the much fewer feature dimension of the transformed word embedding makes SVM with word embeddings a more efficient choice for modeling the large scale social media dataset. For the CNN and RCNN models, they perform slightly better than most of the SVM models but still, the content information is insufficient to achieve a good performance on the Uns posts. As to adding comment information to these models, since the commenters do not always hold the same stance as the author, simply adding comments and post contents together merely adds noise to the model. Among all UTCNN variations, we find that user information is most important, followed by topic and comment information. UTCNN without user information shows results similar to SVMs — it does well for Sup and Neu but detects no Uns. Its best f-scores on both Sup and Neu among all methods show that with enough training data, content-based models can perform well; at the same time, the lack of user information results in too few clues for minor-class posts to either predict their stance directly or link them to other users and posts for improved performance. The 17.5% improvement when adding user information suggests that user information is especially useful when the dataset is highly imbalanced. All models that consider user information predict the minority class successfully. UCTNN without topic information works well but achieves lower performance than the full UTCNN model. The 4.9% performance gain brought by LDA shows that although it is satisfactory for single topic datasets, adding that latent topics still benefits performance: even when we are discussing the same topic, we use different arguments and supporting evidence. Lastly, we get 4.8% improvement when adding comment information and it achieves comparable performance to UTCNN without topic information, which shows that comments also benefit performance. For platforms where user IDs are pixelated or otherwise hidden, adding comments to a text model still improves performance. In its integration of user, content, and comment information, the full UTCNN produces the highest f-scores on all Sup, Neu, and Uns stances among models that predict the Uns class, and the highest macro-average f-score overall. This shows its ability to balance a biased dataset and supports our claim that UTCNN successfully bridges content and user, topic, and comment information for stance classification on social media text. Another merit of UTCNN is that it does not require a balanced training data. This is supported by its outperforming other models though no oversampling technique is applied to the UTCNN related experiments as shown in this paper. Thus we can conclude that the user information provides strong clues and it is still rich even in the minority class. We also investigate the semantic difference when a user acts as an author/liker or a commenter. We evaluated a variation in which all embeddings from the same user were forced to be identical (this is the UTCNN shared user embedding setting in Table TABREF22 ). This setting yielded only a 2.5% improvement over the model without comments, which is not statistically significant. However, when separating authors/likers and commenters embeddings (i.e., the UTCNN full model), we achieved much greater improvements (4.8%). 
We attribute this result to the tendency of users to use different wording for different roles (for instance author vs commenter). This is observed when the user, acting as an author, attempts to support her argument against nuclear power by using improvements in solar power; when acting as a commenter, though, she interacts with post contents by criticizing past politicians who supported nuclear power or by arguing that the proposed evacuation plan in case of a nuclear accident is ridiculous. Based on this finding, in the final UTCNN setting we train two user matrix embeddings for one user: one for the author/liker role and the other for the commenter role. Results on CreateDebate Dataset Table TABREF24 shows the results of UTCNN, baselines as we implemented on the FBFans datset and related work on the CreateDebate dataset. We do not adopt oversampling on these models because the CreateDebate dataset is almost balanced. In previous work, integer linear programming (ILP) or linear-chain conditional random fields (CRFs) were proposed to integrate text features, author, ideology, and user-interaction constraints, where text features are unigram, bigram, and POS-dependencies; the author constraint tends to require that posts from the same author for the same topic hold the same stance; the ideology constraint aims to capture inferences between topics for the same author; the user-interaction constraint models relationships among posts via user interactions such as replies BIBREF7 , BIBREF9 . The SVM with n-gram or average word embedding feature performs just similar to the majority. However, with the transformed word embedding, it achieves superior results. It shows that the learned user and topic embeddings really capture the user and topic semantics. This finding is not so obvious in the FBFans dataset and it might be due to the unfavorable data skewness for SVM. As for CNN and RCNN, they perform slightly better than most SVMs as we found in Table TABREF22 for FBFans. Compared to the ILP BIBREF7 and CRF BIBREF9 methods, the UTCNN user embeddings encode author and user-interaction constraints, where the ideology constraint is modeled by the topic embeddings and text features are modeled by the CNN. The significant improvement achieved by UTCNN suggests the latent representations are more effective than overt model constraints. The PSL model BIBREF12 jointly labels both author and post stance using probabilistic soft logic (PSL) BIBREF23 by considering text features and reply links between authors and posts as in Hasan and Ng's work. Table TABREF24 reports the result of their best AD setting, which represents the full joint stance/disagreement collective model on posts and is hence more relevant to UTCNN. In contrast to their model, the UTCNN user embeddings represent relationships between authors, but UTCNN models do not utilize link information between posts. Though the PSL model has the advantage of being able to jointly label the stances of authors and posts, its performance on posts is lower than the that for the ILP or CRF models. UTCNN significantly outperforms these models on posts and has the potential to predict user stances through the generated user embeddings. For the CreateDebate dataset, we also evaluated performance when not using topic embeddings or user embeddings; as replies in this dataset are viewed as posts, the setting without comment embeddings is not available. 
Table TABREF24 shows the same findings as Table TABREF22 : the 21% improvement in accuracy demonstrates that user information is the most vital. This finding also supports the results in the related work: user constraints are useful and can yield 11.2% improvement in accuracy BIBREF7 . Further considering topic information yields 3.4% improvement, suggesting that knowing the subject of debates provides useful information. In sum, Table TABREF22 together with Table TABREF24 show that UTCNN achieves promising performance regardless of topic, language, data distribution, and platform. Conclusion We have proposed UTCNN, a neural network model that incorporates user, topic, content and comment information for stance classification on social media texts. UTCNN learns user embeddings for all users with minimum active degree, i.e., one post or one like. Topic information obtained from the topic model or the pre-defined labels further improves the UTCNN model. In addition, comment information provides additional clues for stance classification. We have shown that UTCNN achieves promising and balanced results. In the future we plan to explore the effectiveness of the UTCNN user embeddings for author stance classification. Acknowledgements Research of this paper was partially supported by Ministry of Science and Technology, Taiwan, under the contract MOST 104-2221-E-001-024-MY2. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How many layers does the UTCNN model have? Only give me the answer and do not output any other words.
Answer:
[ "eight layers" ]
276
5,789
single_doc_qa
7f5ccd9d77184291836ba33a9ec2e6df
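The document-composition step in Passage 1 above applies two dot operations (a user matrix and a topic matrix) to each word embedding, concatenates the results, and runs unigram/bigram/trigram convolutions with max pooling followed by element-wise averaging. The sketch below mirrors that flow in NumPy; because the extracted text gives the exact dimensions and activation only as INLINEFORM placeholders, the sizes, the tanh nonlinearity, the filter-bank layout, and the omission of biases and of the moderator/comment pooling layers are all assumptions.

import numpy as np

def transform_words(word_vecs, user_matrix, topic_matrix):
    # Project each word vector into a user-dependent and a topic-dependent
    # space, then concatenate the two views (the two dot operations).
    return np.concatenate([word_vecs @ user_matrix, word_vecs @ topic_matrix], axis=1)

def conv_maxpool(x, filters, window):
    # One convolution layer for a single n-gram window size, followed by
    # max-over-time pooling across positions.
    spans = np.stack([x[i:i + window].reshape(-1) for i in range(len(x) - window + 1)])
    return np.tanh(spans @ filters).max(axis=0)

def document_representation(word_vecs, user_matrix, topic_matrix, filter_bank):
    # Unigram / bigram / trigram convolutions, max-pooled, then averaged element-wise.
    x = transform_words(word_vecs, user_matrix, topic_matrix)
    return np.mean([conv_maxpool(x, filter_bank[w], w) for w in (1, 2, 3)], axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    d_word, d_user, d_topic, d_out, n_words = 50, 10, 10, 16, 7
    words = rng.normal(size=(n_words, d_word))
    user_m = rng.normal(size=(d_word, d_user))     # per-user transformation matrix
    topic_m = rng.normal(size=(d_word, d_topic))   # per-topic transformation matrix
    filters = {w: rng.normal(size=(w * (d_user + d_topic), d_out)) for w in (1, 2, 3)}
    print(document_representation(words, user_m, topic_m, filters).shape)  # (16,)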
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Deep learning systems have shown a lot of promise for extractive Question Answering (QA), with performance comparable to humans when large scale data is available. However, practitioners looking to build QA systems for specific applications may not have the resources to collect tens of thousands of questions on corpora of their choice. At the same time, state-of-the-art machine reading systems do not lend well to low-resource QA settings where the number of labeled question-answer pairs are limited (c.f. Table 2 ). Semi-supervised QA methods like BIBREF0 aim to improve this performance by leveraging unlabeled data which is easier to collect. In this work, we present a semi-supervised QA system which requires the end user to specify a set of base documents and only a small set of question-answer pairs over a subset of these documents. Our proposed system consists of three stages. First, we construct cloze-style questions (predicting missing spans of text) from the unlabeled corpus; next, we use the generated clozes to pre-train a powerful neural network model for extractive QA BIBREF1 , BIBREF2 ; and finally, we fine-tune the model on the small set of provided QA pairs. Our cloze construction process builds on a typical writing phenomenon and document structure: an introduction precedes and summarizes the main body of the article. Many large corpora follow such a structure, including Wikipedia, academic papers, and news articles. We hypothesize that we can benefit from the un-annotated corpora to better answer various questions – at least ones that are lexically similar to the content in base documents and directly require factual information. We apply the proposed system on three datasets from different domains – SQuAD BIBREF3 , TriviaQA-Web BIBREF4 and the BioASQ challenge BIBREF5 . We observe significant improvements in a low-resource setting across all three datasets. For SQuAD and TriviaQA, we attain an F1 score of more than 50% by merely using 1% of the training data. Our system outperforms the approaches for semi-supervised QA presented in BIBREF0 , and a baseline which uses the same unlabeled data but with a language modeling objective for pretraining. In the BioASQ challenge, we outperform the best performing system from previous year's challenge, improving over a baseline which does transfer learning from the SQuAD dataset. Our analysis reveals that questions which ask for factual information and match to specific parts of the context documents benefit the most from pretraining on automatically constructed clozes. Related Work Semi-supervised learning augments the labeled dataset $L$ with a potentially larger unlabeled dataset $U$ . BIBREF0 presented a model, GDAN, which trained an auxiliary neural network to generate questions from passages by reinforcement learning, and augment the labeled dataset with the generated questions to train the QA model. Here we use a much simpler heuristic to generate the auxiliary questions, which also turns out to be more effective as we show superior performance compared to GDAN. Several approaches have been suggested for generating natural questions BIBREF6 , BIBREF7 , BIBREF8 , however none of them show a significant improvement of using the generated questions in a semi-supervised setting. 
Recent papers also use unlabeled data for QA by training large language models and extracting contextual word vectors from them to input to the QA model BIBREF9 , BIBREF10 , BIBREF11 . The applicability of this method in the low-resource setting is unclear as the extra inputs increase the number of parameters in the QA model, however, our pretraining can be easily applied to these models as well. Domain adaptation (and Transfer learning) leverage existing large scale datasets from a source domain (or task) to improve performance on a target domain (or task). For deep learning and QA, a common approach is to pretrain on the source dataset and then fine-tune on the target dataset BIBREF12 , BIBREF13 . BIBREF14 used SQuAD as a source for the target BioASQ dataset, and BIBREF15 used Book Test BIBREF16 as source for the target SQuAD dataset. BIBREF17 transfer learned model layers from the tasks of sequence labeling, text classification and relation classification to show small improvements on SQuAD. All these works use manually curated source datatset, which in themselves are expensive to collect. Instead, we show that it is possible to automatically construct the source dataset from the same domain as the target, which turns out to be more beneficial in terms of performance as well (c.f. Section "Experiments & Results" ). Several cloze datasets have been proposed in the literature which use heuristics for construction BIBREF18 , BIBREF19 , BIBREF20 . We further see the usability of such a dataset in a semi-supervised setting. Methodology Our system comprises of following three steps: Cloze generation: Most of the documents typically follow a template, they begin with an introduction that provides an overview and a brief summary for what is to follow. We assume such a structure while constructing our cloze style questions. When there is no clear demarcation, we treat the first $K\%$ (hyperparameter, in our case 20%) of the document as the introduction. While noisy, this heuristic generates a large number of clozes given any corpus, which we found to be beneficial for semi-supervised learning despite the noise. We use a standard NLP pipeline based on Stanford CoreNLP (for SQuAD, TrivaQA and PubMed) and the BANNER Named Entity Recognizer (only for PubMed articles) to identify entities and phrases. Assume that a document comprises of introduction sentences $\lbrace q_1, q_2, ... q_n\rbrace $ , and the remaining passages $\lbrace p_1, p_2, .. p_m\rbrace $ . Additionally, let's say that each sentence $q_i$ in introduction is composed of words $\lbrace w_1, w_2, ... w_{l_{q_i}}\rbrace $ , where $l_{q_i}$ is the length of $q_i$ . We consider a $\text{match} (q_i, p_j)$ , if there is an exact string match of a sequence of words $\lbrace w_k, w_{k+1}, .. w_{l_{q_i}}\rbrace $ between the sentence $q_i$ and passage $p_j$ . If this sequence is either a noun phrase, verb phrase, adjective phrase or a named entity in $\lbrace p_1, p_2, .. p_m\rbrace $0 , as recognized by CoreNLP or BANNER, we select it as an answer span $\lbrace p_1, p_2, .. p_m\rbrace $1 . Additionally, we use $\lbrace p_1, p_2, .. p_m\rbrace $2 as the passage $\lbrace p_1, p_2, .. p_m\rbrace $3 and form a cloze question $\lbrace p_1, p_2, .. p_m\rbrace $4 from the answer bearing sentence $\lbrace p_1, p_2, .. p_m\rbrace $5 by replacing $\lbrace p_1, p_2, .. p_m\rbrace $6 with a placeholder. As a result, we obtain passage-question-answer ( $\lbrace p_1, p_2, .. p_m\rbrace $7 ) triples (Table 1 shows an example). 
As a post-processing step, we prune out $\lbrace p_1, p_2, .. p_m\rbrace $8 triples where the word overlap between the question (Q) and passage (P) is less than 2 words (after excluding the stop words). The process relies on the fact that answer candidates from the introduction are likely to be discussed in detail in the remainder of the article. In effect, the cloze question from the introduction and the matching paragraph in the body forms a question and context passage pair. We create two cloze datasets, one each from Wikipedia corpus (for SQuAD and TriviaQA) and PUBMed academic papers (for the BioASQ challenge), consisting of 2.2M and 1M clozes respectively. From analyzing the cloze data manually, we were able to answer 76% times for the Wikipedia set and 80% times for the PUBMed set using the information in the passage. In most cases the cloze paraphrased the information in the passage, which we hypothesized to be a useful signal for the downstream QA task. We also investigate the utility of forming subsets of the large cloze corpus, where we select the top passage-question-answer triples, based on the different criteria, like i) jaccard similarity of answer bearing sentence in introduction and the passage ii) the tf-idf scores of answer candidates and iii) the length of answer candidates. However, we empirically find that we were better off using the entire set rather than these subsets. Pre-training: We make use of the generated cloze dataset to pre-train an expressive neural network designed for the task of reading comprehension. We work with two publicly available neural network models – the GA Reader BIBREF2 (to enable comparison with prior work) and BiDAF + Self-Attention (SA) model from BIBREF1 (which is among the best performing models on SQuAD and TriviaQA). After pretraining, the performance of BiDAF+SA on a dev set of the (Wikipedia) cloze questions is 0.58 F1 score and 0.55 Exact Match (EM) score. This implies that the cloze corpus is neither too easy, nor too difficult to answer. Fine Tuning: We fine tune the pre-trained model, from the previous step, over a small set of labelled question-answer pairs. As we shall later see, this step is crucial, and it only requires a handful of labelled questions to achieve a significant proportion of the performance typically attained by training on tens of thousands of questions. Datasets We apply our system to three datasets from different domains. SQuAD BIBREF3 consists of questions whose answers are free form spans of text from passages in Wikipedia articles. We follow the same setting as in BIBREF0 , and split $10\%$ of training questions as the test set, and report performance when training on subsets of the remaining data ranging from $1\%$ to $90\%$ of the full set. We also report the performance on the dev set when trained on the full training set ( $1^\ast $ in Table 2 ). We use the same hyperparameter settings as in prior work. We compare and study four different settings: 1) the Supervised Learning (SL) setting, which is only trained on the supervised data, 2) the best performing GDAN model from BIBREF0 , 3) pretraining on a Language Modeling (LM) objective and fine-tuning on the supervised data, and 4) pretraining on the Cloze dataset and fine-tuning on the supervised data. The LM and Cloze methods use exactly the same data for pretraining, but differ in the loss functions used. We report F1 and EM scores on our test set using the official evaluation scripts provided by the authors of the dataset. 
TriviaQA BIBREF4 comprises of over 95K web question-answer-evidence triples. Like SQuAD, the answers are spans of text. Similar to the setting in SQuAD, we create multiple smaller subsets of the entire set. For our semi-supervised QA system, we use the BiDAF+SA model BIBREF1 – the highest performing publicly available system for TrivaQA. Here again, we compare the supervised learning (SL) settings against the pretraining on Cloze set and fine tuning on the supervised set. We report F1 and EM scores on the dev set. We also test on the BioASQ 5b dataset, which consists of question-answer pairs from PubMed abstracts. We use the publicly available system from BIBREF14 , and follow the exact same setup as theirs, focusing only on factoid and list questions. For this setting, there are only 899 questions for training. Since this is already a low-resource problem we only report results using 5-fold cross-validation on all the available data. We report Mean Reciprocal Rank (MRR) on the factoid questions, and F1 score for the list questions. Main Results Table 2 shows a comparison of the discussed settings on both SQuAD and TriviaQA. Without any fine-tuning (column 0) the performance is low, probably because the model never saw a real question, but we see significant gains with Cloze pretraining even with very little labeled data. The BiDAF+SA model, exceeds an F1 score of $50\%$ with only $1\%$ of the training data (454 questions for SQuAD, and 746 questions for TriviaQA), and approaches $90\%$ of the best performance with only $10\%$ labeled data. The gains over the SL setting, however, diminish as the size of the labeled set increases and are small when the full dataset is available. Cloze pretraining outperforms the GDAN baseline from BIBREF0 using the same SQuAD dataset splits. Additionally, we show improvements in the $90\%$ data case unlike GDAN. Our approach is also applicable in the extremely low-resource setting of $1\%$ data, which we suspect GDAN might have trouble with since it uses the labeled data to do reinforcement learning. Furthermore, we are able to use the same cloze dataset to improve performance on both SQuAD and TriviaQA datasets. When we use the same unlabeled data to pre-train with a language modeling objective, the performance is worse, showing the bias we introduce by constructing clozes is important. On the BioASQ dataset (Table 3 ) we again see a significant improvement when pretraining with the cloze questions over the supervised baseline. The improvement is smaller than what we observe with SQuAD and TriviaQA datasets – we believe this is because questions are generally more difficult in BioASQ. BIBREF14 showed that pretraining on SQuAD dataset improves the downstream performance on BioASQ. Here, we show a much larger improvement by pretraining on cloze questions constructed in an unsupervised manner from the same domain. Analysis Regression Analysis: To understand which types of questions benefit from pre-training, we pre-specified certain features (see Figure 1 right) for each of the dev set questions in SQuAD, and then performed linear regression to predict the F1 score for that question from these features. We predict the F1 scores from the cloze pretrained model ( $y^{\text{cloze}}$ ), the supervised model ( $y^{\text{sl}}$ ), and the difference of the two ( $y^{\text{cloze}}-y^{\text{sl}}$ ), when using $10\%$ of labeled data. The coefficients of the fitted model are shown in Figure 1 (left) along with their std errors. 
Positive coefficients indicate that a high value of that feature is predictive of a high F1 score, and a negative coefficient indicates that a small value of that feature is predictive of a high F1 score (or a high difference of F1 scores from the two models in the case of $y^{\text{cloze}}-y^{\text{sl}}$ ). The two strongest effects we observe are that a high lexical overlap between the question and the sentence containing the answer is indicative of high boost with pretraining, and that a high lexical overlap between the question and the whole passage is indicative of the opposite. This is hardly surprising, since our cloze construction process is biased towards questions which have a similar phrasing to the answer sentences in context. Hence, test questions with a similar property are answered correctly after pretraining, whereas those with a high overlap with the whole passage tend to have lower performance. The pretraining also favors questions with short answers because the cloze construction process produces short answer spans. Also passages and questions which consist of tokens infrequent in the SQuAD training corpus receive a large boost after pretraining, since the unlabeled data covers a larger domain. Performance on question types: Figure 2 shows the average gain in F1 score for different types of questions, when we pretrain on the clozes compared to the supervised case. This analysis is done on the $10\%$ split of the SQuAD training set. We consider two classifications of each question – one determined on the first word (usually a wh-word) of the question (Figure 2 (bottom)) and one based on the output of a separate question type classifier adapted from BIBREF21 . We use the coarse grain labels namely Abbreviation (ABBR), Entity (ENTY), Description (DESC), Human (HUM), Location (LOC), Numeric (NUM) trained on a Logistic Regression classification system . While there is an improvement across the board, we find that abbreviation questions in particular receive a large boost. Also, "why" questions show the least improvement, which is in line with our expectation, since these usually require reasoning or world knowledge which cloze questions rarely require. Conclusion In this paper, we show that pre-training QA models with automatically constructed cloze questions improves the performance of the models significantly, especially when there are few labeled examples. The performance of the model trained only on the cloze questions is poor, validating the need for fine-tuning. Through regression analysis, we find that pretraining helps with questions which ask for factual information located in a specific part of the context. For future work, we plan to explore the active learning setup for this task – specifically, which passages and / or types of questions can we select to annotate, such that there is a maximum performance gain from fine-tuning. We also want to explore how to adapt cloze style pre-training to NLP tasks other than QA. Acknowledgments Bhuwan Dhingra is supported by NSF under grants CCF-1414030 and IIS-1250956 and by grants from Google. Danish Pruthi and Dheeraj Rajagopal are supported by the DARPA Big Mechanism program under ARO contract W911NF-14-1-0436. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Is it possible to convert cloze-style questions to naturally-looking questions? Only give me the answer and do not output any other words.
Answer:
[ "Unanswerable", "Unanswerable" ]
276
3,826
single_doc_qa
c5fa846aaae44836acd2f0d5d7e13c86
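The cloze-construction heuristic in Passage 1 above (match a phrase from the introduction against the body, blank it out to form a question, and prune low-overlap triples) can be sketched as follows. Here candidate_spans stands in for the noun-phrase and named-entity detection the authors perform with CoreNLP and BANNER, the split of the first 20% of a document into the introduction is assumed to have happened upstream, and the toy sentences are illustrative assumptions.

import re

STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "is", "was", "for", "on", "s"}

def content_words(text):
    # Lowercased word set with a small stop-word list removed.
    return {w.lower() for w in re.findall(r"\w+", text)} - STOPWORDS

def make_clozes(intro_sentences, body_paragraphs, candidate_spans, min_overlap=2):
    # For every candidate span that occurs verbatim in an introduction sentence
    # and in a body paragraph, emit a (passage, cloze question, answer) triple,
    # pruning triples whose question/passage content-word overlap is too small.
    triples = []
    for sentence in intro_sentences:
        for span in candidate_spans:
            if span not in sentence:
                continue
            question = sentence.replace(span, "@placeholder")
            for paragraph in body_paragraphs:
                if span in paragraph and \
                        len(content_words(question) & content_words(paragraph)) >= min_overlap:
                    triples.append((paragraph, question, span))
    return triples

if __name__ == "__main__":
    intro = ["The Eiffel Tower was completed in 1889 for the World's Fair."]
    body = ["Construction of the Eiffel Tower finished in 1889, just in time for the fair."]
    for passage, question, answer in make_clozes(intro, body, ["Eiffel Tower", "1889"]):
        print(question, "->", answer)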
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Brooksley Elizabeth Born (born August 27, 1940) is an American attorney and former public official who, from August 26, 1996, to June 1, 1999, was chair of the Commodity Futures Trading Commission (CFTC), the federal agency which oversees the U.S. futures and commodity options markets. During her tenure on the CFTC, Born lobbied Congress and the President to give the CFTC oversight of off-exchange markets for derivatives, in addition to its role with respect to exchange-traded derivatives, but her warnings were ignored or dismissed, and her calls for reform resisted by other regulators.<ref name="nytimes">Goodman, Peter S. The Reckoning - Taking Hard New Look at a Greenspan Legacy, The New York Times, October 9, 2008.</ref> Born resigned as chairperson on June 1, 1999, shortly after Congress passed legislation prohibiting her agency from regulating derivatives. Early life and education Born graduated from Abraham Lincoln High School (San Francisco, California) at the age of 16. She then attended Stanford University, where she majored in English and was graduated with the class of 1961. She initially wanted to become a doctor, but a guidance counsellor at Stanford advised her against medicine, so she majored in English literature instead. She then attended Stanford Law School, one of only seven women in her class. She was the first female student ever to be named president of the Stanford Law Review. She received the "Outstanding Senior" award and graduated as valedictorian of the class of 1964. Legal career Immediately after law school Born was selected as a law clerk to judge Henry Edgerton of the U.S. Court of Appeals for the District of Columbia Circuit. It was during this time that she met her first husband, Jacob C. Landau, who was a journalist covering the Federal courts at the time. Following her clerkship, she became an associate at the Washington, D.C.-based international law firm of Arnold & Porter. Born was attracted to Arnold & Porter because it was one of the few major law firms to have a woman partner at that time, Carolyn Agger, who was the head of the tax practice. Born took a two-year leave of absence from Arnold & Porter to accompany her first husband to Boston, where he had received a fellowship. During that time she worked as a research assistant to law professor Alan Dershowitz. Born's early career at Arnold & Porter focused on international trade law, in which she represented a number of Swiss industries and the government of Switzerland. She developed a practice representing clients in numerous complex litigation and arbitration cases involving financial market transactions. Among her high-profile cases was the matter of the Hunt Brothers attempt to corner the market in silver in the 1970s. She made partner at Arnold & Porter, after moving to a three-day schedule to help raise her second child, and eventually rose to be the head of the firm's derivatives practice. Born was among the first female attorneys to systematically address inequities regarding how the laws treated women. Born and another female lawyer, Marna Tucker, taught what is considered to have been the first "Women and the Law" course at Catholic University’s Columbus School of Law. The class exclusively concerned prejudicial treatment of women under the laws of the United States, past and present. 
Born and Tucker were surprised to discover that there was no textbook on the issue at the time. Born is also one of the co-founders of the National Women's Law Center. Born also helped rewrite the American Bar Association rules to make it possible for more women and minorities to sit on federal bench. During her long legal career, and into her retirement, Born did much pro bono and other types of volunteer work. She was active in the American Bar Association, the largest professional organization of lawyers in the United States. Initially Born was named a member of the governing council of the ABA's Individual Rights Section, eventually becoming chairperson. Born and Tucker founded the ABA Women's Caucus, the first organization of female lawyers in the ABA. She held several other senior positions in the ABA, including being named the first woman member of the ABA's Standing Committee on the Federal Judiciary. As a member of the Judiciary Committee, Born provided testimony and opinion on persons nominated for federal judgeships. In 1980 she was named chair of the committee. As chair of the committee, Born was invited to address the U.S. Congress regarding the nomination of Judge Sandra Day O'Connor to the U.S. Supreme Court. In 1993, Born's name was floated as a possible candidate for Attorney General of the United States, but Janet Reno was nominated. In July 2009, Nancy Pelosi appointed Brooksley Born as a commissioner to the Financial Crisis Inquiry Commission (FCIC). Born and the OTC derivatives market Born was appointed to the CFTC on April 15, 1994, by President Bill Clinton. Due to litigation against Bankers Trust Company by Procter and Gamble and other corporate clients, Born and her team at the CFTC sought comments on the regulation of over-the-counter derivatives, a first step in the process of writing CFTC regulations to supplement the existing regulations of the Federal Reserve System, the Options Clearing Corporation, and the National Association of Insurance Commissioners. Born was particularly concerned about swaps, financial instruments that are traded over the counter between banks, insurance companies or other funds or companies, and thus have no transparency except to the two counterparties and the counterparties' regulators, if any. CFTC regulation was strenuously opposed by Federal Reserve chairman Alan Greenspan, and by Treasury Secretaries Robert Rubin and Lawrence Summers. On May 7, 1998, former SEC Chairman Arthur Levitt joined Rubin and Greenspan in objecting to the issuance of the CFTC's concept release. Their response dismissed Born's analysis and focused on the hypothetical possibility that CFTC regulation of swaps and other OTC derivative instruments could create a "legal uncertainty" regarding such financial instruments, hypothetically reducing the value of the instruments. They argued that the imposition of regulatory costs would "stifle financial innovation" and encourage financial capital to transfer its transactions offshore. The disagreement between Born and the Executive Office's top economic policy advisors has been described not only as a classic Washington turf war, but also a war of ideologies, insofar as it is possible to argue that Born's actions were consistent with Keynesian and neoclassical economics while Greenspan, Rubin, Levitt, and Summers consistently espoused neoliberal, and neoconservative policies. In 1998, a trillion-dollar hedge fund called Long Term Capital Management (LTCM) was near collapse. 
Using mathematical models to calculate debt risk, LTCM used derivatives to leverage $5 billion into more than $1 trillion, doing business with fifteen of Wall Street's largest financial institutions. The derivative transactions were not regulated, nor were investors able to evaluate LTCM's exposures. Born stated, "I thought that LTCM was exactly what I had been worried about". In the last weekend of September 1998, the President's working group was told that the entire American economy hung in the balance. After intervention by the Federal Reserve, the crisis was averted. In congressional hearings into the crisis, Greenspan acknowledged that language had been introduced into an agriculture bill that would prevent CFTC from regulating the derivatives which were at the center of the crisis that threatened the US economy. U.S. Representative Maurice Hinchey (D-NY) asked "How many more failures do you think we'd have to have before some regulation in this area might be appropriate?" In response, Greenspan brushed aside the substance of Born's warnings with the simple assertion that "the degree of supervision of regulation of the over-the-counter derivatives market is quite adequate to maintain a degree of stability in the system". Born's warning was that there wasn't any regulation of them. Born's chief of staff, Michael Greenberger summed up Greenspan's position this way: "Greenspan didn't believe that fraud was something that needed to be enforced, and he assumed she probably did. And of course, she did." Under heavy pressure from the financial lobby, legislation prohibiting regulation of derivatives by Born's agency was passed by the Congress. Born resigned on June 1, 1999. The derivatives market continued to grow yearly throughout both terms of George W. Bush's administration. On September 15, 2008, the bankruptcy of Lehman Brothers forced a broad recognition of a financial crisis in both the US and world capital markets. As Lehman Brothers' failure temporarily reduced financial capital's confidence, a number of newspaper articles and television programs suggested that the failure's possible causes included the conflict between the CFTC and the other regulators.Faiola, Anthony, Nakashima, Ellen and Drew, Jill. The Crash: Risk and Regulation - What Went Wrong, The Washington Post, October 15, 2008. Born declined to publicly comment on the unfolding 2008 crisis until March 2009, when she said: "The market grew so enormously, with so little oversight and regulation, that it made the financial crisis much deeper and more pervasive than it otherwise would have been." She also lamented the influence of Wall Street lobbyists on the process and the refusal of regulators to discuss even modest reforms. An October 2009 Frontline documentary titled "The Warning" described Born's thwarted efforts to regulate and bring transparency to the derivatives market, and the continuing opposition thereto. The program concluded with an excerpted interview with Born sounding another warning: "I think we will have continuing danger from these markets and that we will have repeats of the financial crisis -- may differ in details but there will be significant financial downturns and disasters attributed to this regulatory gap, over and over, until we learn from experience." In 2009 Born, along with Sheila Bair of the FDIC, was awarded the John F. 
Kennedy Profiles in Courage Award in recognition of the "political courage she demonstrated in sounding early warnings about conditions that contributed" to the 2007-08 financial crisis. According to Caroline Kennedy, "Brooksley Born recognized that the financial security of all Americans was being put at risk by the greed, negligence and opposition of powerful and well connected interests.... The catastrophic financial events of recent months have proved them [Born and Sheila Bair] right." One member of the President's working group had a change of heart about Brooksley Born. SEC Chairman Arthur Levitt stated "I've come to know her as one of the most capable, dedicated, intelligent and committed public servants that I have ever come to know", adding that "I could have done much better. I could have made a difference" in response to her warnings. In 2010, a documentary film Inside Job further alleged that derivatives regulation was ineffective from the Clinton administration on. Along with fellow whistleblower, former IMF Chief Economist Raghuram Rajan, who was also scorned by the economic establishment, Brooksley Born was cited as one of the authorities arguing that financial derivatives increase economic risk. Personal life Born is married to Alexander E. Bennett (also retired from Arnold & Porter). She has five adult children - two from a previous marriage to Jacob Landau and three stepchildren. Notably, Born was named a partner at Arnold & Porter while working part-time so she could raise her two young children. When both of her children were school-age, Born returned to practice full-time. References External links Attorney profile at Arnold & Porter Brooksley Born (2009 Winner) of the Profiles in Courage Award, with acceptance speech transcript and NECN video Profile at MarketsWiki Speeches and statements "Testimony Of Brooksley Born Chairperson of the CFTC Concerning The Over-The-Counter Derivatives Market", before the House Committee On Banking And Financial Services, July 24, 1998. "The Lessons of Long Term Capital Management L.P.", Remarks of Brooksley Born, Chairperson of the CFTC, Chicago-Kent-IIT Commodities Law Institute, Chicago, Illinois, October 15, 1998. Interview: Brooksley Born for "PBS Frontline: The Warning", PBS, (streaming VIDEO 1 hour), October 20, 2009. Articles Manuel Roig-Franzia. "Credit Crisis Cassandra: Brooksley Born's Unheeded Warning Is a Rueful Echo 10 Years On", The Washington Post, May 26, 2009 Taibbi, Matt. "The Great American Bubble Machine", Rolling Stone, July 9–23, 2009 Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What award did Brooksley Born receive in 2009? Only give me the answer and do not output any other words.
Answer:
[ "In 2009, Brooksley Born received the John F. Kennedy Profiles in Courage Award." ]
276
2,705
single_doc_qa
8a81147fa48f46f0b08f585cc83593cc
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Machine translation has made remarkable progress, and studies claiming it to reach a human parity are starting to appear BIBREF0. However, when evaluating translations of the whole documents rather than isolated sentences, human raters show a stronger preference for human over machine translation BIBREF1. These findings emphasize the need to shift towards context-aware machine translation both from modeling and evaluation perspective. Most previous work on context-aware NMT assumed that either all the bilingual data is available at the document level BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 or at least its fraction BIBREF11. But in practical scenarios, document-level parallel data is often scarce, which is one of the challenges when building a context-aware system. We introduce an approach to context-aware machine translation using only monolingual document-level data. In our setting, a separate monolingual sequence-to-sequence model (DocRepair) is used to correct sentence-level translations of adjacent sentences. The key idea is to use monolingual data to imitate typical inconsistencies between context-agnostic translations of isolated sentences. The DocRepair model is trained to map inconsistent groups of sentences into consistent ones. The consistent groups come from the original training data; the inconsistent groups are obtained by sampling round-trip translations for each isolated sentence. To validate the performance of our model, we use three kinds of evaluation: the BLEU score, contrastive evaluation of translation of several discourse phenomena BIBREF11, and human evaluation. We show strong improvements for all metrics. We analyze which discourse phenomena are hard to capture using monolingual data only. Using contrastive test sets for targeted evaluation of several contextual phenomena, we compare the performance of the models trained on round-trip translations and genuine document-level parallel data. Among the four phenomena in the test sets we use (deixis, lexical cohesion, VP ellipsis and ellipsis which affects NP inflection) we find VP ellipsis to be the hardest phenomenon to be captured using round-trip translations. Our key contributions are as follows: we introduce the first approach to context-aware machine translation using only monolingual document-level data; our approach shows substantial improvements in translation quality as measured by BLEU, targeted contrastive evaluation of several discourse phenomena and human evaluation; we show which discourse phenomena are hard to capture using monolingual data only. Our Approach: Document-level Repair We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations of a context-agnostic MT system. It does not use any states of a trained MT model whose outputs it corrects and therefore can in principle be trained to correct translations from any black-box MT system. The DocRepair model requires only monolingual document-level data in the target language. It is a monolingual sequence-to-sequence model that maps inconsistent groups of sentences into consistent ones. Consistent groups come from monolingual document-level data. 
To obtain inconsistent groups, each sentence in a group is replaced with its round-trip translation produced in isolation from context. More formally, forming a training minibatch for the DocRepair model involves the following steps (see also Figure FIGREF9): sample several groups of sentences from the monolingual data; for each sentence in a group, (i) translate it using a target-to-source MT model, (ii) sample a translation of this back-translated sentence in the source language using a source-to-target MT model; using these round-trip translations of isolated sentences, form an inconsistent version of the initial groups; use inconsistent groups as input for the DocRepair model, consistent ones as output. At test time, the process of getting document-level translations is two-step (Figure FIGREF10): produce translations of isolated sentences using a context-agnostic MT model; apply the DocRepair model to a sequence of context-agnostic translations to correct inconsistencies between translations. In the scope of the current work, the DocRepair model is the standard sequence-to-sequence Transformer. Sentences in a group are concatenated using a reserved token-separator between sentences. The Transformer is trained to correct these long inconsistent pseudo-sentences into consistent ones. The token-separator is then removed from corrected translations. Evaluation of Contextual Phenomena We use contrastive test sets for evaluation of discourse phenomena for English-Russian by BIBREF11. These test sets allow for testing different kinds of phenomena which, as we show, can be captured from monolingual data with varying success. In this section, we provide test sets statistics and briefly describe the tested phenomena. For more details, the reader is referred to BIBREF11. Evaluation of Contextual Phenomena ::: Test sets There are four test sets in the suite. Each test set contains contrastive examples. It is specifically designed to test the ability of a system to adapt to contextual information and handle the phenomenon under consideration. Each test instance consists of a true example (a sequence of sentences and their reference translation from the data) and several contrastive translations which differ from the true one only in one specific aspect. All contrastive translations are correct and plausible translations at the sentence level, and only context reveals the inconsistencies between them. The system is asked to score each candidate translation, and we compute the system accuracy as the proportion of times the true translation is preferred to the contrastive ones. Test set statistics are shown in Table TABREF15. The suites for deixis and lexical cohesion are split into development and test sets, with 500 examples from each used for validation purposes and the rest for testing. Convergence of both consistency scores on these development sets and BLEU score on a general development set are used as early stopping criteria in models training. For ellipsis, there is no dedicated development set, so we evaluate on all the ellipsis data and do not use it for development. Evaluation of Contextual Phenomena ::: Phenomena overview Deixis Deictic words or phrases, are referential expressions whose denotation depends on context. This includes personal deixis (“I”, “you”), place deixis (“here”, “there”), and discourse deixis, where parts of the discourse are referenced (“that's a good question”). 
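As a concrete illustration of the minibatch construction and separator-based input formatting described above, here is a minimal Python sketch. The translation and sampling interfaces are hypothetical stand-ins for the two sentence-level MT systems, and the separator token name is invented for the example.

```python
# Minimal sketch of the DocRepair training-pair construction described above.
# `mt_target_to_source` and `mt_source_to_target` stand in for the two
# sentence-level MT systems; their translate/sample methods are hypothetical.

SEP = "<sep>"  # reserved token-separator between sentences in a group

def round_trip(sentence, mt_target_to_source, mt_source_to_target):
    """Round-trip translate one sentence in isolation from its context."""
    source_side = mt_target_to_source.translate(sentence)   # e.g. beam search
    return mt_source_to_target.sample(source_side)           # sampled translation

def make_training_pair(group, mt_target_to_source, mt_source_to_target):
    """Map a consistent group of sentences to an (input, output) pair."""
    inconsistent = [round_trip(s, mt_target_to_source, mt_source_to_target)
                    for s in group]
    # DocRepair input: round-trip (inconsistent) sentences joined by the separator;
    # DocRepair output: the original (consistent) group.
    return f" {SEP} ".join(inconsistent), f" {SEP} ".join(group)

def docrepair_inference(group_translations, docrepair_model):
    """Two-step test-time procedure: context-agnostic translations go in,
    corrected translations come out; the separator is removed afterwards."""
    repaired = docrepair_model.generate(f" {SEP} ".join(group_translations))
    return [s.strip() for s in repaired.split(SEP)]
```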
The test set examples are all related to person deixis, specifically the T-V distinction between informal and formal you (Latin “tu” and “vos”) in the Russian translations, and test for consistency in this respect. Ellipsis Ellipsis is the omission from a clause of one or more words that are nevertheless understood in the context of the remaining elements. In machine translation, elliptical constructions in the source language pose a problem in two situations. First, if the target language does not allow the same types of ellipsis, requiring the elided material to be predicted from context. Second, if the elided material affects the syntax of the sentence. For example, in Russian the grammatical function of a noun phrase, and thus its inflection, may depend on the elided verb, or, conversely, the verb inflection may depend on the elided subject. There are two different test sets for ellipsis. One contains examples where a morphological form of a noun group in the last sentence can not be understood without context beyond the sentence level (“ellipsis (infl.)” in Table TABREF15). Another includes cases of verb phrase ellipsis in English, which does not exist in Russian, thus requires predicting the verb when translating into Russian (“ellipsis (VP)” in Table TABREF15). Lexical cohesion The test set focuses on reiteration of named entities. Where several translations of a named entity are possible, a model has to prefer consistent translations over inconsistent ones. Experimental Setup ::: Data preprocessing We use the publicly available OpenSubtitles2018 corpus BIBREF12 for English and Russian. For a fair comparison with previous work, we train the baseline MT system on the data released by BIBREF11. Namely, our MT system is trained on 6m instances. These are sentence pairs with a relative time overlap of subtitle frames between source and target language subtitles of at least $0.9$. We gathered 30m groups of 4 consecutive sentences as our monolingual data. We used only documents not containing groups of sentences from general development and test sets as well as from contrastive test sets. The main results we report are for the model trained on all 30m fragments. We use the tokenization provided by the corpus and use multi-bleu.perl on lowercased data to compute BLEU score. We use beam search with a beam of 4. Sentences were encoded using byte-pair encoding BIBREF13, with source and target vocabularies of about 32000 tokens. Translation pairs were batched together by approximate sequence length. Each training batch contained a set of translation pairs containing approximately 15000 source tokens. It has been shown that Transformer's performance depends heavily on batch size BIBREF14, and we chose a large batch size to ensure the best performance. In training context-aware models, for early stopping we use both convergence in BLEU score on the general development set and scores on the consistency development sets. After training, we average the 5 latest checkpoints. Experimental Setup ::: Models The baseline model, the model used for back-translation, and the DocRepair model are all Transformer base models BIBREF15. More precisely, the number of layers is $N=6$ with $h = 8$ parallel attention layers, or heads. The dimensionality of input and output is $d_{model} = 512$, and the inner-layer of a feed-forward networks has dimensionality $d_{ff}=2048$. We use regularization as described in BIBREF15. As a second baseline, we use the two-pass CADec model BIBREF11. 
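For reference, the Transformer base dimensions listed above map directly onto a standard library configuration. The sketch below uses PyTorch, which is an assumption; the excerpt does not name the authors' toolkit, and the dropout value shown is the common default rather than a figure taken from the paper.

```python
import torch.nn as nn

# Transformer "base" dimensions as listed above: 6 layers, 8 attention heads,
# d_model = 512, feed-forward inner size 2048.
transformer_base = nn.Transformer(
    d_model=512,
    nhead=8,
    num_encoder_layers=6,
    num_decoder_layers=6,
    dim_feedforward=2048,
    dropout=0.1,  # assumed default; the paper's exact regularization follows BIBREF15
)
```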
The first pass produces sentence-level translations. The second pass takes both the first-pass translation and representations of the context sentences as input and returns contextualized translations. CADec requires document-level parallel training data, while DocRepair only needs monolingual training data. Experimental Setup ::: Generating round-trip translations On the selected 6m instances we train sentence-level translation models in both directions. To create training data for DocRepair, we proceed as follows. The Russian monolingual data is first translated into English, using the Russian$\rightarrow $English model and beam search with beam size of 4. Then, we use the English$\rightarrow $Russian model to sample translations with temperature of $0{.}5$. For each sentence, we precompute 20 sampled translations and randomly choose one of them when forming a training minibatch for DocRepair. Also, in training, we replace each token in the input with a random one with the probability of $10\%$. Experimental Setup ::: Optimizer As in BIBREF15, we use the Adam optimizer BIBREF16, the parameters are $\beta _1 = 0{.}9$, $\beta _2 = 0{.}98$ and $\varepsilon = 10^{-9}$. We vary the learning rate over the course of training using the formula: where $warmup\_steps = 16000$ and $scale=4$. Results ::: General results The BLEU scores are provided in Table TABREF24 (we evaluate translations of 4-sentence fragments). To see which part of the improvement is due to fixing agreement between sentences rather than simply sentence-level post-editing, we train the same repair model at the sentence level. Each sentence in a group is now corrected separately, then they are put back together in a group. One can see that most of the improvement comes from accounting for extra-sentential dependencies. DocRepair outperforms the baseline and CADec by 0.7 BLEU, and its sentence-level repair version by 0.5 BLEU. Results ::: Consistency results Scores on the phenomena test sets are provided in Table TABREF26. For deixis, lexical cohesion and ellipsis (infl.) we see substantial improvements over both the baseline and CADec. The largest improvement over CADec (22.5 percentage points) is for lexical cohesion. However, there is a drop of almost 5 percentage points for VP ellipsis. We hypothesize that this is because it is hard to learn to correct inconsistencies in translations caused by VP ellipsis relying on monolingual data alone. Figure FIGREF27(a) shows an example of inconsistency caused by VP ellipsis in English. There is no VP ellipsis in Russian, and when translating auxiliary “did” the model has to guess the main verb. Figure FIGREF27(b) shows steps of generating round-trip translations for the target side of the previous example. When translating from Russian, main verbs are unlikely to be translated as the auxiliary “do” in English, and hence the VP ellipsis is rarely present on the English side. This implies the model trained using the round-trip translations will not be exposed to many VP ellipsis examples in training. We discuss this further in Section SECREF34. Table TABREF28 provides scores for deixis and lexical cohesion separately for different distances between sentences requiring consistency. It can be seen, that the performance of DocRepair degrades less than that of CADec when the distance between sentences requiring consistency gets larger. Results ::: Human evaluation We conduct a human evaluation on random 700 examples from our general test set. 
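The optimizer section above gives the learning-rate schedule only through its parameters ($warmup\_steps = 16000$ and $scale = 4$). The sketch below assumes the standard Transformer warmup schedule scaled by this factor; the paper's exact formula may differ from this reconstruction.

```python
# Assumed learning-rate schedule: the standard Transformer schedule
# (inverse-sqrt decay with linear warmup) multiplied by `scale`.
def learning_rate(step, d_model=512, warmup_steps=16000, scale=4.0):
    step = max(step, 1)
    return scale * d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)
```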
We picked only examples where a DocRepair translation is not a full copy of the baseline one. The annotators were provided an original group of sentences in English and two translations: baseline context-agnostic one and the one corrected by the DocRepair model. Translations were presented in random order with no indication which model they came from. The task is to pick one of the three options: (1) the first translation is better, (2) the second translation is better, (3) the translations are of equal quality. The annotators were asked to avoid the third answer if they are able to give preference to one of the translations. No other guidelines were given. The results are provided in Table TABREF30. In about $52\%$ of the cases annotators marked translations as having equal quality. Among the cases where one of the translations was marked better than the other, the DocRepair translation was marked better in $73\%$ of the cases. This shows a strong preference of the annotators for corrected translations over the baseline ones. Varying Training Data In this section, we discuss the influence of the training data chosen for document-level models. In all experiments, we used the DocRepair model. Varying Training Data ::: The amount of training data Table TABREF33 provides BLEU and consistency scores for the DocRepair model trained on different amount of data. We see that even when using a dataset of moderate size (e.g., 5m fragments) we can achieve performance comparable to the model trained on a large amount of data (30m fragments). Moreover, we notice that deixis scores are less sensitive to the amount of training data than lexical cohesion and ellipsis scores. The reason might be that, as we observed in our previous work BIBREF11, inconsistencies in translations due to the presence of deictic words and phrases are more frequent in this dataset than other types of inconsistencies. Also, as we show in Section SECREF7, this is the phenomenon the model learns faster in training. Varying Training Data ::: One-way vs round-trip translations In this section, we discuss the limitations of using only monolingual data to model inconsistencies between sentence-level translations. In Section SECREF25 we observed a drop in performance on VP ellipsis for DocRepair compared to CADec, which was trained on parallel data. We hypothesized that this is due to the differences between one-way and round-trip translations, and now we test this hypothesis. To do so, we fix the dataset and vary the way in which the input for DocRepair is generated: round-trip or one-way translations. The latter assumes that document-level data is parallel, and translations are sampled from the source side of the sentences in a group rather than from their back-translations. For parallel data, we take 1.5m parallel instances which were used for CADec training and add 1m instances from our monolingual data. For segments in the parallel part, we either sample translations from the source side or use round-trip translations. The results are provided in Table TABREF35. The model trained on one-way translations is slightly better than the one trained on round-trip translations. As expected, VP ellipsis is the hardest phenomena to be captured using round-trip translations, and the DocRepair model trained on one-way translated data gains 6% accuracy on this test set. This shows that the DocRepair model benefits from having access to non-synthetic English data. 
This results in exposing DocRepair at training time to Russian translations which suffer from the same inconsistencies as the ones it will have to correct at test time. Varying Training Data ::: Filtering: monolingual (no filtering) or parallel Note that the scores of the DocRepair model trained on 2.5m instances randomly chosen from monolingual data (Table TABREF33) are different from the ones for the model trained on 2.5m instances combined from parallel and monolingual data (Table TABREF35). For convenience, we show these two in Table TABREF36. The domain, the dataset these two data samples were gathered from, and the way we generated training data for DocRepair (round-trip translations) are all the same. The only difference lies in how the data was filtered. For parallel data, as in the previous work BIBREF6, we picked only sentence pairs with large relative time overlap of subtitle frames between source-language and target-language subtitles. This is necessary to ensure the quality of translation data: one needs groups of consecutive sentences in the target language where every sentence has a reliable translation. Table TABREF36 shows that the quality of the model trained on data which came from the parallel part is worse than the one trained on monolingual data. This indicates that requiring each sentence in a group to have a reliable translation changes the distribution of the data, which might be not beneficial for translation quality and provides extra motivation for using monolingual data. Learning Dynamics Let us now look into how the process of DocRepair training progresses. Figure FIGREF38 shows how the BLEU scores with the reference translation and with the baseline context-agnostic translation (i.e. the input for the DocRepair model) are changing during training. First, the model quickly learns to copy baseline translations: the BLEU score with the baseline is very high. Then it gradually learns to change them, which leads to an improvement in BLEU with the reference translation and a drop in BLEU with the baseline. Importantly, the model is reluctant to make changes: the BLEU score between translations of the converged model and the baseline is 82.5. We count the number of changed sentences in every 4-sentence fragment in the test set and plot the histogram in Figure FIGREF38. In over than 20$\%$ of the cases the model has not changed base translations at all. In almost $40\%$, it modified only one sentence and left the remaining 3 sentences unchanged. The model changed more than half sentences in a group in only $14\%$ of the cases. Several examples of the DocRepair translations are shown in Figure FIGREF43. Figure FIGREF42 shows how consistency scores are changing in training. For deixis, the model achieves the final quality quite quickly; for the rest, it needs a large number of training steps to converge. Related Work Our work is most closely related to two lines of research: automatic post-editing (APE) and document-level machine translation. Related Work ::: Automatic post-editing Our model can be regarded as an automatic post-editing system – a system designed to fix systematic MT errors that is decoupled from the main MT system. Automatic post-editing has a long history, including rule-based BIBREF17, statistical BIBREF18 and neural approaches BIBREF19, BIBREF20, BIBREF21. 
In terms of architectures, modern approaches use neural sequence-to-sequence models, either multi-source architectures that consider both the original source and the baseline translation BIBREF19, BIBREF20, or monolingual repair systems, as in BIBREF21, which is concurrent work to ours. True post-editing datasets are typically small and expensive to create BIBREF22, hence synthetic training data has been created that uses original monolingual data as output for the sequence-to-sequence model, paired with an automatic back-translation BIBREF23 and/or round-trip translation as its input(s) BIBREF19, BIBREF21. While previous work on automatic post-editing operated on the sentence level, the main novelty of this work is that our DocRepair model operates on groups of sentences and is thus able to fix consistency errors caused by the context-agnostic baseline MT system. We consider this strategy of sentence-level baseline translation and context-aware monolingual repair attractive when parallel document-level data is scarce. For training, the DocRepair model only requires monolingual document-level data. While we create synthetic training data via round-trip translation similarly to earlier work BIBREF19, BIBREF21, note that we purposefully use sentence-level MT systems for this to create the types of consistency errors that we aim to fix with the context-aware DocRepair model. Not all types of consistency errors that we want to fix emerge from a round-trip translation, so access to parallel document-level data can be useful (Section SECREF34). Related Work ::: Document-level NMT Neural models of MT that go beyond the sentence-level are an active research area BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF10, BIBREF9, BIBREF11. Typically, the main MT system is modified to take additional context as its input. One limitation of these approaches is that they assume that parallel document-level training data is available. Closest to our work are two-pass models for document-level NMT BIBREF24, BIBREF11, where a second, context-aware model takes the translation and hidden representations of the sentence-level first-pass model as its input. The second-pass model can in principle be trained on a subset of the parallel training data BIBREF11, somewhat relaxing the assumption that all training data is at the document level. Our work is different from this previous work in two main respects. Firstly, we show that consistency can be improved with only monolingual document-level training data. Secondly, the DocRepair model is decoupled from the first-pass MT system, which improves its portability. Conclusions We introduce the first approach to context-aware machine translation using only monolingual document-level data. We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations. The model performs automatic post-editing on a sequence of sentence-level translations, refining translations of sentences in context of each other. Our approach results in substantial improvements in translation quality as measured by BLEU, targeted contrastive evaluation of several discourse phenomena and human evaluation. Moreover, we perform error analysis and detect which discourse phenomena are hard to capture using only monolingual document-level data. While in the current work we used text fragments of 4 sentences, in future work we would like to consider longer contexts. Acknowledgments We would like to thank the anonymous reviewers for their comments. 
The authors also thank David Talbot and Yandex Machine Translation team for helpful discussions and inspiration. Ivan Titov acknowledges support of the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518). Rico Sennrich acknowledges support from the Swiss National Science Foundation (105212_169888), the European Union’s Horizon 2020 research and innovation programme (grant agreement no 825460), and the Royal Society (NAF\R1\180122). Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How many humans evaluated the results? Only give me the answer and do not output any other words.
Answer:
[ "Unanswerable", "Unanswerable" ]
276
4,992
single_doc_qa
13c62ac8fb454bf2ab04a03c4663f23b
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The automatic processing of medical texts and documents plays an increasingly important role in the recent development of the digital health area. To enable dedicated Natural Language Processing (NLP) that is highly accurate with respect to medically relevant categories, manually annotated data from this domain is needed. One category of high interest and relevance are medical entities. Only very few annotated corpora in the medical domain exist. Many of them focus on the relation between chemicals and diseases or proteins and diseases, such as the BC5CDR corpus BIBREF0, the Comparative Toxicogenomics Database BIBREF1, the FSU PRotein GEne corpus BIBREF2 or the ADE (adverse drug effect) corpus BIBREF3. The NCBI Disease Corpus BIBREF4 contains condition mention annotations along with annotations of symptoms. Several new corpora of annotated case reports were made available recently. grouin-etal-2019-clinical presented a corpus with medical entity annotations of clinical cases written in French, copdPhenotype presented a corpus focusing on phenotypic information for chronic obstructive pulmonary disease while 10.1093/database/bay143 presented a corpus focusing on identifying main finding sentences in case reports. The corpus most comparable to ours is the French corpus of clinical case reports by grouin-etal-2019-clinical. Their annotations are based on UMLS semantic types. Even though there is an overlap in annotated entities, semantic classes are not the same. Lab results are subsumed under findings in our corpus and are not annotated as their own class. Factors extend beyond gender and age and describe any kind of risk factor that contributes to a higher probability of having a certain disease. Our corpus includes additional entity types. We annotate conditions, findings (including medical findings such as blood values), factors, and also modifiers which indicate the negation of other entities as well as case entities, i. e., entities specific to one case report. An overview is available in Table TABREF3. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation tasks Case reports are standardized in the CARE guidelines BIBREF5. They represent a detailed description of the symptoms, signs, diagnosis, treatment, and follow-up of an individual patient. We focus on documents freely available through PubMed Central (PMC). The presentation of the patient's case can usually be found in a dedicated section or the abstract. We perform a manual annotation of all mentions of case entities, conditions, findings, factors and modifiers. The scope of our manual annotation is limited to the presentation of a patient's signs and symptoms. In addition, we annotate the title of the case report. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation Guidelines We annotate the following entities: case entity marks the mention of a patient. A case report can contain more than one case description. Therefore, all the findings, factors and conditions related to one patient are linked to the respective case entity. Within the text, this entity is often represented by the first mention of the patient and overlaps with the factor annotations which can, e. g., mark sex and age (cf. Figure FIGREF12). 
condition marks a medical disease such as pneumothorax or dislocation of the shoulder. factor marks a feature of a patient which might influence the probability for a specific diagnosis. It can be immutable (e. g., sex and age), describe a specific medical history (e. g., diabetes mellitus) or a behaviour (e. g., smoking). finding marks a sign or symptom a patient shows. This can be visible (e. g., rash), described by a patient (e. g., headache) or measurable (e. g., decreased blood glucose level). negation modifier explicitly negate the presence of a certain finding usually setting the case apart from common cases. We also annotate relations between these entities, where applicable. Since we work on case descriptions, the anchor point of these relations is the case that is described. The following relations are annotated: has relations exist between a case entity and factor, finding or condition entities. modifies relations exist between negation modifiers and findings. causes relations exist between conditions and findings. Example annotations are shown in Figure FIGREF16. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotators We asked medical doctors experienced in extracting knowledge related to medical entities from texts to annotate the entities described above. Initially, we asked four annotators to test our guidelines on two texts. Subsequently, identified issues were discussed and resolved. Following this pilot annotation phase, we asked two different annotators to annotate two case reports according to our guidelines. The same annotators annotated an overall collection of 53 case reports. Inter-annotator agreement is calculated based on two case reports. We reach a Cohen's kappa BIBREF6 of 0.68. Disagreements mainly appear for findings that are rather unspecific such as She no longer eats out with friends which can be seen as a finding referring to “avoidance behaviour”. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Annotation Tools and Format The annotation was performed using WebAnno BIBREF7, a web-based tool for linguistic annotation. The annotators could choose between a pre-annotated version or a blank version of each text. The pre-annotated versions contained suggested entity spans based on string matches from lists of conditions and findings synonym lists. Their quality varied widely throughout the corpus. The blank version was preferred by the annotators. We distribute the corpus in BioC JSON format. BioC was chosen as it allows us to capture the complexities of the annotations in the biomedical domain. It represented each documents properties ranging from full text, individual passages/sentences along with captured annotations and relationships in an organized manner. BioC is based on character offsets of annotations and allows the stacking of different layers. A Corpus of Medical Case Reports with Medical Entity Annotation ::: Corpus Overview The corpus consists of 53 documents, which contain an average number of 156.1 sentences per document, each with 19.55 tokens on average. The corpus comprises 8,275 sentences and 167,739 words in total. However, as mentioned above, only case presentation sections, headings and abstracts are annotated. The numbers of annotated entities are summarized in Table TABREF24. Findings are the most frequently annotated type of entity. This makes sense given that findings paint a clinical picture of the patient's condition. 
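As noted above, the corpus is distributed in the offset-based BioC JSON format. The snippet below is an illustrative Python rendering of what a single annotation and relation could look like in that style; all identifiers, texts, offsets, and field values are invented for the example and are not taken from the corpus.

```python
# Illustrative only: roughly what an offset-based BioC-style annotation for a
# "finding" entity could look like. Ids, texts and offsets are invented.
example_annotation = {
    "id": "T12",
    "text": "decreased blood glucose level",
    "infons": {"type": "finding"},
    "locations": [{"offset": 1043, "length": 29}],
}

# Relations (e.g., a "has" relation between a case entity and a finding) refer
# to annotation ids; role names here are hypothetical.
example_relation = {
    "id": "R3",
    "infons": {"type": "has"},
    "nodes": [{"refid": "T1", "role": "case"}, {"refid": "T12", "role": "finding"}],
}
```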
The number of tokens per entity ranges from one token for all types to 5 tokens for cases (average length 3.1), nine tokens for conditions (average length 2.0), 16 tokens for factors (average length 2.5), 25 tokens for findings (average length 2.6) and 18 tokens for modifiers (average length 1.4) (cf. Table TABREF24). Examples of rather long entities are given in Table TABREF25. Entities can appear in a discontinuous way. We model this as a relation between two spans which we call “discontinuous” (cf. Figure FIGREF26). Especially findings often appear as discontinuous entities, we found 543 discontinuous finding relations. The numbers for conditions and factors are lower with seven and two, respectively. Entities can also be nested within one another. This happens either when the span of one annotation is completely embedded in the span of another annotation (fully-nested; cf. Figure FIGREF12), or when there is a partial overlapping between the spans of two different entities (partially-nested; cf. Figure FIGREF12). There is a high number of inter-sentential relations in the corpus (cf. Table TABREF27). This can be explained by the fact that the case entity occurs early in each document; furthermore, it is related to finding and factor annotations that are distributed across different sentences. The most frequently annotated relation in our corpus is the has-relation between a case entity and the findings related to that case. This correlates with the high number of finding entities. The relations contained in our corpus are summarized in Table TABREF27. Baseline systems for Named Entity Recognition in medical case reports We evaluate the corpus using Named Entity Recognition (NER), i. e., the task of finding mentions of concepts of interest in unstructured text. We focus on detecting cases, conditions, factors, findings and modifiers in case reports (cf. Section SECREF6). We approach this as a sequence labeling problem. Four systems were developed to offer comparable robust baselines. The original documents are pre-processed (sentence splitting and tokenization with ScispaCy). We do not perform stop word removal or lower-casing of the tokens. The BIO labeling scheme is used to capture the order of tokens belonging to the same entity type and enable span-level detection of entities. Detection of nested and/or discontinuous entities is not supported. The annotated corpus is randomized and split in five folds using scikit-learn BIBREF9. Each fold has a train, test and dev split with the test split defined as .15% of the train split. This ensures comparability between the presented systems. Baseline systems for Named Entity Recognition in medical case reports ::: Conditional Random Fields Conditional Random Fields (CRF) BIBREF10 are a standard approach when dealing with sequential data in the context of sequence labeling. We use a combination of linguistic and semantic features, with a context window of size five, to describe each of the tokens and the dependencies between them. Hyper-parameter optimization is performed using randomized search and cross validation. Span-based F1 score is used as the optimization metric. Baseline systems for Named Entity Recognition in medical case reports ::: BiLSTM-CRF Prior to the emergence of deep neural language models, BiLSTM-CRF models BIBREF11 had achieved state-of-the-art results for the task of sequence labeling. We use a BiLSTM-CRF model with both word-level and character-level input. 
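Before the BiLSTM-CRF details that follow, here is a minimal sketch of the CRF baseline described above. The sklearn-crfsuite library and the feature template are assumptions made for illustration; the excerpt does not specify the authors' CRF implementation or their exact linguistic and semantic feature set.

```python
import sklearn_crfsuite  # assumed CRF implementation; the paper does not name one

def token_features(tokens, i):
    """Illustrative feature template (not the authors' exact feature set):
    surface and shape features for the token plus a small context window."""
    tok = tokens[i]
    feats = {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "is_digit": tok.isdigit(),
        "suffix3": tok[-3:],
    }
    for offset in (-2, -1, 1, 2):  # together with the token itself: window of size five
        j = i + offset
        if 0 <= j < len(tokens):
            feats[f"{offset}:lower"] = tokens[j].lower()
    return feats

def sent_to_features(tokens):
    return [token_features(tokens, i) for i in range(len(tokens))]

# X: sentences as feature-dict sequences, y: BIO label sequences.
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,  # placeholder values
                           max_iterations=200, all_possible_transitions=True)
# crf.fit(X_train, y_train); predictions = crf.predict(X_test)
```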
BioWordVec BIBREF12 pre-trained word embeddings are used in the embedding layer for the input representation. A bidirectional LSTM layer is applied to a multiplication of the two input representations. Finally, a CRF layer is applied to predict the sequence of labels. Dropout and L1/L2 regularization is used where applicable. He (uniform) initialization BIBREF13 is used to initialize the kernels of the individual layers. As the loss metric, CRF-based loss is used, while optimizing the model based on the CRF Viterbi accuracy. Additionally, span-based F1 score is used to serialize the best performing model. We train for a maximum of 100 epochs, or until an early stopping criterion is reached (no change in validation loss value grater than 0.01 for ten consecutive epochs). Furthermore, Adam BIBREF14 is used as the optimizer. The learning rate is reduced by a factor of 0.3 in case no significant increase of the optimization metric is achieved in three consecutive epochs. Baseline systems for Named Entity Recognition in medical case reports ::: Multi-Task Learning Multi-Task Learning (MTL) BIBREF15 has become popular with the progress in deep learning. This model family is characterized by simultaneous optimization of multiple loss functions and transfer of knowledge achieved this way. The knowledge is transferred through the use of one or multiple shared layers. Through finding supporting patterns in related tasks, MTL provides better generalization on unseen cases and the main tasks we are trying to solve. We rely on the model presented by bekoulis2018joint and reuse the implementation provided by the authors. The model jointly trains two objectives supported by the dataset: the main task of NER and a supporting task of Relation Extraction (RE). Two separate models are developed for each of the tasks. The NER task is solved with the help of a BiLSTM-CRF model, similar to the one presented in Section SECREF32 The RE task is solved by using a multi-head selection approach, where each token can have none or more relationships to in-sentence tokens. Additionally, this model also leverages the output of the NER branch model (the CRF prediction) to learn label embeddings. Shared layers consist of a concatenation of word and character embeddings followed by two bidirectional LSTM layers. We keep most of the parameters suggested by the authors and change (1) the number of training epochs to 100 to allow the comparison to other deep learning approaches in this work, (2) use label embeddings of size 64, (3) allow gradient clipping and (4) use $d=0.8$ as the pre-trained word embedding dropout and $d=0.5$ for all other dropouts. $\eta =1^{-3}$ is used as the learning rate with the Adam optimizer and tanh activation functions across layers. Although it is possible to use adversarial training BIBREF16, we omit from using it. We also omit the publication of results for the task of RE as we consider it to be a supporting task and no other competing approaches have been developed. Baseline systems for Named Entity Recognition in medical case reports ::: BioBERT Deep neural language models have recently evolved to a successful method for representing text. In particular, Bidirectional Encoder Representations from Transformers (BERT) outperformed previous state-of-the-art methods by a large margin on various NLP tasks BIBREF17. For our experiments, we use BioBERT, an adaptation of BERT for the biomedical domain, pre-trained on PubMed abstracts and PMC full-text articles BIBREF18. 
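As a concrete illustration of the BioBERT setup just introduced (and elaborated below), here is a minimal sketch using the Hugging Face transformers library. Both the library choice and the checkpoint identifier are assumptions rather than details from the paper, and the BIO label set is derived from the corpus entity types for illustration.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Assumptions: Hugging Face `transformers` as the toolkit and
# "dmis-lab/biobert-base-cased-v1.1" as the BioBERT checkpoint.
LABELS = ["O",
          "B-case", "I-case", "B-condition", "I-condition",
          "B-factor", "I-factor", "B-finding", "I-finding",
          "B-modifier", "I-modifier"]

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
model = AutoModelForTokenClassification.from_pretrained(
    "dmis-lab/biobert-base-cased-v1.1",
    num_labels=len(LABELS),
    id2label=dict(enumerate(LABELS)),
    label2id={label: i for i, label in enumerate(LABELS)},
)
# Fine-tuning then proceeds as standard token classification over BIO tags.
```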
The BERT architecture for deriving text representations uses 12 hidden layers, consisting of 768 units each. For NER, token level BIO-tag probabilities are computed with a single output layer based on the representations from the last layer of BERT. We fine-tune the model on the entity recognition task during four training epochs with batch size $b=32$, dropout probability $d=0.1$ and learning rate $\eta =2^{-5}$. These hyper-parameters are proposed by Devlin2018 for BERT fine-tuning. Baseline systems for Named Entity Recognition in medical case reports ::: Evaluation To evaluate the performance of the four systems, we calculate the span-level precision (P), recall (R) and F1 scores, along with corresponding micro and macro scores. The reported values are shown in Table TABREF29 and are averaged over five folds, utilising the seqeval framework. With a macro avg. F1-score of 0.59, MTL achieves the best result with a significant margin compared to CRF, BiLSTM-CRF and BERT. This confirms the usefulness of jointly training multiple objectives (minimizing multiple loss functions), and enabling knowledge transfer, especially in a setting with limited data (which is usually the case in the biomedical NLP domain). This result also suggest the usefulness of BioBERT for other biomedical datasets as reported by Lee2019. Despite being a rather standard approach, CRF outperforms the more elaborated BiLSTM-CRF, presumably due to data scarcity and class imbalance. We hypothesize that an increase in training data would yield better results for BiLSTM-CRF but not outperform transfer learning approach of MTL (or even BioBERT). In contrast to other common NER corpora, like CoNLL 2003, even the best baseline system only achieves relatively low scores. This outcome is due to the inherent difficulty of the task (annotators are experienced medical doctors) and the small number of training samples. Conclusion We present a new corpus, developed to facilitate the processing of case reports. The corpus focuses on five distinct entity types: cases, conditions, factors, findings and modifiers. Where applicable, relationships between entities are also annotated. Additionally, we annotate discontinuous entities with a special relationship type (discontinuous). The corpus presented in this paper is the very first of its kind and a valuable addition to the scarce number of corpora available in the field of biomedical NLP. Its complexity, given the discontinuous nature of entities and a high number of nested and multi-label entities, poses new challenges for NLP methods applied for NER and can, hence, be a valuable source for insights into what entities “look like in the wild”. Moreover, it can serve as a playground for new modelling techniques such as the resolution of discontinuous entities as well as multi-task learning given the combination of entities and their relations. We provide an evaluation of four distinct NER systems that will serve as robust baselines for future work but which are, as of yet, unable to solve all the complex challenges this dataset holds. A functional service based on the presented corpus is currently being integrated, as a NER service, in the QURATOR platform BIBREF20. Acknowledgments The research presented in this article is funded by the German Federal Ministry of Education and Research (BMBF) through the project QURATOR (Unternehmen Region, Wachstumskern, grant no. 03WKDA1A), see http://qurator.ai. 
We want to thank our medical experts for their help annotating the data set, especially Ashlee Finckh and Sophie Klopfenstein. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How large is the corpus? Only give me the answer and do not output any other words.
Answer:
[ "8,275 sentences and 167,739 words in total", "The corpus comprises 8,275 sentences and 167,739 words in total." ]
276
3,500
single_doc_qa
358fdea8c7984260846006c50d01b3ef