Dataset Viewer
Auto-converted to Parquet
context
stringlengths
1.67k
46.5k
question
stringlengths
87
854
answer_prefix
stringclasses
5 values
answers
listlengths
1
3
max_new_tokens
int64
52
532
length
int64
360
8.19k
task
stringclasses
8 values
_id
stringlengths
32
32
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: 10pt 1.10pt [ Characterizing Political Fake News in Twitter by its Meta-DataJulio Amador Díaz LópezAxel Oehmichen Miguel Molina-Solana( j.amador, axelfrancois.oehmichen11, [email protected] ) Imperial College London This article presents a preliminary approach towards characterizing political fake news on Twitter through the analysis of their meta-data. In particular, we focus on more than 1.5M tweets collected on the day of the election of Donald Trump as 45th president of the United States of America. We use the meta-data embedded within those tweets in order to look for differences between tweets containing fake news and tweets not containing them. Specifically, we perform our analysis only on tweets that went viral, by studying proxies for users' exposure to the tweets, by characterizing accounts spreading fake news, and by looking at their polarization. We found significant differences on the distribution of followers, the number of URLs on tweets, and the verification of the users. ] Introduction While fake news, understood as deliberately misleading pieces of information, have existed since long ago (e.g. it is not unusual to receive news falsely claiming the death of a celebrity), the term reached the mainstream, particularly so in politics, during the 2016 presidential election in the United States BIBREF0 . Since then, governments and corporations alike (e.g. Google BIBREF1 and Facebook BIBREF2 ) have begun efforts to tackle fake news as they can affect political decisions BIBREF3 . Yet, the ability to define, identify and stop fake news from spreading is limited. Since the Obama campaign in 2008, social media has been pervasive in the political arena in the United States. Studies report that up to 62% of American adults receive their news from social media BIBREF4 . The wide use of platforms such as Twitter and Facebook has facilitated the diffusion of fake news by simplifying the process of receiving content with no significant third party filtering, fact-checking or editorial judgement. Such characteristics make these platforms suitable means for sharing news that, disguised as legit ones, try to confuse readers. Such use and their prominent rise has been confirmed by Craig Silverman, a Canadian journalist who is a prominent figure on fake news BIBREF5 : “In the final three months of the US presidential campaign, the top-performing fake election news stories on Facebook generated more engagement than the top stories from major news outlet”. Our current research hence departs from the assumption that social media is a conduit for fake news and asks the question of whether fake news (as spam was some years ago) can be identified, modelled and eventually blocked. In order to do so, we use a sample of more that 1.5M tweets collected on November 8th 2016 —election day in the United States— with the goal of identifying features that tweets containing fake news are likely to have. As such, our paper aims to provide a preliminary characterization of fake news in Twitter by looking into meta-data embedded in tweets. Considering meta-data as a relevant factor of analysis is in line with findings reported by Morris et al. BIBREF6 . We argue that understanding differences between tweets containing fake news and regular tweets will allow researchers to design mechanisms to block fake news in Twitter. Specifically, our goals are: 1) compare the characteristics of tweets labelled as containing fake news to tweets labelled as not containing them, 2) characterize, through their meta-data, viral tweets containing fake news and the accounts from which they originated, and 3) determine the extent to which tweets containing fake news expressed polarized political views. For our study, we used the number of retweets to single-out those that went viral within our sample. Tweets within that subset (viral tweets hereafter) are varied and relate to different topics. We consider that a tweet contains fake news if its text falls within any of the following categories described by Rubin et al. BIBREF7 (see next section for the details of such categories): serious fabrication, large-scale hoaxes, jokes taken at face value, slanted reporting of real facts and stories where the truth is contentious. The dataset BIBREF8 , manually labelled by an expert, has been publicly released and is available to researchers and interested parties. From our results, the following main observations can be made: Our findings resonate with similar work done on fake news such as the one from Allcot and Gentzkow BIBREF9 . Therefore, even if our study is a preliminary attempt at characterizing fake news on Twitter using only their meta-data, our results provide external validity to previous research. Moreover, our work not only stresses the importance of using meta-data, but also underscores which parameters may be useful to identify fake news on Twitter. The rest of the paper is organized as follows. The next section briefly discusses where this work is located within the literature on fake news and contextualizes the type of fake news we are studying. Then, we present our hypotheses, the data, and the methodology we follow. Finally, we present our findings, conclusions of this study, and future lines of work. Defining Fake news Our research is connected to different strands of academic knowledge related to the phenomenon of fake news. In relation to Computer Science, a recent survey by Conroy and colleagues BIBREF10 identifies two popular approaches to single-out fake news. On the one hand, the authors pointed to linguistic approaches consisting in using text, its linguistic characteristics and machine learning techniques to automatically flag fake news. On the other, these researchers underscored the use of network approaches, which make use of network characteristics and meta-data, to identify fake news. With respect to social sciences, efforts from psychology, political science and sociology, have been dedicated to understand why people consume and/or believe misinformation BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 . Most of these studies consistently reported that psychological biases such as priming effects and confirmation bias play an important role in people ability to discern misinformation. In relation to the production and distribution of fake news, a recent paper in the field of Economics BIBREF9 found that most fake news sites use names that resemble those of legitimate organizations, and that sites supplying fake news tend to be short-lived. These authors also noticed that fake news items are more likely shared than legitimate articles coming from trusted sources, and they tend to exhibit a larger level of polarization. The conceptual issue of how to define fake news is a serious and unresolved issue. As the focus of our work is not attempting to offer light on this, we will rely on work by other authors to describe what we consider as fake news. In particular, we use the categorization provided by Rubin et al. BIBREF7 . The five categories they described, together with illustrative examples from our dataset, are as follows: Research Hypotheses Previous works on the area (presented in the section above) suggest that there may be important determinants for the adoption and diffusion of fake news. Our hypotheses builds on them and identifies three important dimensions that may help distinguishing fake news from legit information: Taking those three dimensions into account, we propose the following hypotheses about the features that we believe can help to identify tweets containing fake news from those not containing them. They will be later tested over our collected dataset. Exposure. Characterization. Polarization. Data and Methodology For this study, we collected publicly available tweets using Twitter's public API. Given the nature of the data, it is important to emphasize that such tweets are subject to Twitter's terms and conditions which indicate that users consent to the collection, transfer, manipulation, storage, and disclosure of data. Therefore, we do not expect ethical, legal, or social implications from the usage of the tweets. Our data was collected using search terms related to the presidential election held in the United States on November 8th 2016. Particularly, we queried Twitter's streaming API, more precisely the filter endpoint of the streaming API, using the following hashtags and user handles: #MyVote2016, #ElectionDay, #electionnight, @realDonaldTrump and @HillaryClinton. The data collection ran for just one day (Nov 8th 2016). One straightforward way of sharing information on Twitter is by using the retweet functionality, which enables a user to share a exact copy of a tweet with his followers. Among the reasons for retweeting, Body et al. BIBREF15 reported the will to: 1) spread tweets to a new audience, 2) to show one’s role as a listener, and 3) to agree with someone or validate the thoughts of others. As indicated, our initial interest is to characterize tweets containing fake news that went viral (as they are the most harmful ones, as they reach a wider audience), and understand how it differs from other viral tweets (that do not contain fake news). For our study, we consider that a tweet went viral if it was retweeted more than 1000 times. Once we have the dataset of viral tweets, we eliminated duplicates (some of the tweets were collected several times because they had several handles) and an expert manually inspected the text field within the tweets to label them as containing fake news, or not containing them (according to the characterization presented before). This annotated dataset BIBREF8 is publicly available and can be freely reused. Finally, we use the following fields within tweets (from the ones returned by Twitter's API) to compare their distributions and look for differences between viral tweets containing fake news and viral tweets not containing fake news: In the following section, we provide graphical descriptions of the distribution of each of the identified attributes for the two sets of tweets (those labelled as containing fake news and those labelled as not containing them). Where appropriate, we normalized and/or took logarithms of the data for better representation. To gain a better understanding of the significance of those differences, we use the Kolmogorov-Smirnov test with the null hypothesis that both distributions are equal. Results The sample collected consisted on 1 785 855 tweets published by 848 196 different users. Within our sample, we identified 1327 tweets that went viral (retweeted more than 1000 times by the 8th of November 2016) produced by 643 users. Such small subset of viral tweets were retweeted on 290 841 occasions in the observed time-window. The 1327 `viral' tweets were manually annotated as containing fake news or not. The annotation was carried out by a single person in order to obtain a consistent annotation throughout the dataset. Out of those 1327 tweets, we identified 136 as potentially containing fake news (according to the categories previously described), and the rest were classified as `non containing fake news'. Note that the categorization is far from being perfect given the ambiguity of fake news themselves and human judgement involved in the process of categorization. Because of this, we do not claim that this dataset can be considered a ground truth. The following results detail characteristics of these tweets along the previously mentioned dimensions. Table TABREF23 reports the actual differences (together with their associated p-values) of the distributions of viral tweets containing fake news and viral tweets not containing them for every variable considered. Exposure Figure FIGREF24 shows that, in contrast to other kinds of viral tweets, those containing fake news were created more recently. As such, Twitter users were exposed to fake news related to the election for a shorter period of time. However, in terms of retweets, Figure FIGREF25 shows no apparent difference between containing fake news or not containing them. That is confirmed by the Kolmogorov-Smirnoff test, which does not discard the hypothesis that the associated distributions are equal. In relation to the number of favourites, users that generated at least a viral tweet containing fake news appear to have, on average, less favourites than users that do not generate them. Figure FIGREF26 shows the distribution of favourites. Despite the apparent visual differences, the difference are not statistically significant. Finally, the number of hashtags used in viral fake news appears to be larger than those in other viral tweets. Figure FIGREF27 shows the density distribution of the number of hashtags used. However, once again, we were not able to find any statistical difference between the average number of hashtags in a viral tweet and the average number of hashtags in viral fake news. Characterization We found that 82 users within our sample were spreading fake news (i.e. they produced at least one tweet which was labelled as fake news). Out of those, 34 had verified accounts, and the rest were unverified. From the 48 unverified accounts, 6 have been suspended by Twitter at the date of writing, 3 tried to imitate legitimate accounts of others, and 4 accounts have been already deleted. Figure FIGREF28 shows the proportion of verified accounts to unverified accounts for viral tweets (containing fake news vs. not containing fake news). From the chart, it is clear that there is a higher chance of fake news coming from unverified accounts. Turning to friends, accounts distributing fake news appear to have, on average, the same number of friends than those distributing tweets with no fake news. However, the density distribution of friends from the accounts (Figure FIGREF29 ) shows that there is indeed a statistically significant difference in their distributions. If we take into consideration the number of followers, accounts generating viral tweets with fake news do have a very different distribution on this dimension, compared to those accounts generating viral tweets with no fake news (see Figure FIGREF30 ). In fact, such differences are statistically significant. A useful representation for friends and followers is the ratio between friends/followers. Figures FIGREF31 and FIGREF32 show this representation. Notice that accounts spreading viral tweets with fake news have, on average, a larger ratio of friends/followers. The distribution of those accounts not generating fake news is more evenly distributed. With respect to the number of mentions, Figure FIGREF33 shows that viral tweets labelled as containing fake news appear to use mentions to other users less frequently than viral tweets not containing fake news. In other words, tweets containing fake news mostly contain 1 mention, whereas other tweets tend to have two). Such differences are statistically significant. The analysis (Figure FIGREF34 ) of the presence of media in the tweets in our dataset shows that tweets labelled as not containing fake news appear to present more media elements than those labelled as fake news. However, the difference is not statistically significant. On the other hand, Figure FIGREF35 shows that viral tweets containing fake news appear to include more URLs to other sites than viral tweets that do not contain fake news. In fact, the difference between the two distributions is statistically significant (assuming INLINEFORM0 ). Polarization Finally, manual inspection of the text field of those viral tweets labelled as containing fake news shows that 117 of such tweets expressed support for Donald Trump, while only 8 supported Hillary Clinton. The remaining tweets contained fake news related to other topics, not expressing support for any of the candidates. Discussion As a summary, and constrained by our existing dataset, we made the following observations regarding differences between viral tweets labelled as containing fake news and viral tweets labelled as not containing them: These findings (related to our initial hypothesis in Table TABREF44 ) clearly suggest that there are specific pieces of meta-data about tweets that may allow the identification of fake news. One such parameter is the time of exposure. Viral tweets containing fake news are shorter-lived than those containing other type of content. This notion seems to resonate with our findings showing that a number of accounts spreading fake news have already been deleted or suspended by Twitter by the time of writing. If one considers that researchers using different data have found similar results BIBREF9 , it appears that the lifetime of accounts, together with the age of the questioned viral content could be useful to identify fake news. In the light of this finding, accounts newly created should probably put under higher scrutiny than older ones. This in fact, would be a nice a-priori bias for a Bayesian classifier. Accounts spreading fake news appear to have a larger proportion of friends/followers (i.e. they have, on average, the same number of friends but a smaller number of followers) than those spreading viral content only. Together with the fact that, on average, tweets containing fake news have more URLs than those spreading viral content, it is possible to hypothesize that, both, the ratio of friends/followers of the account producing a viral tweet and number of URLs contained in such a tweet could be useful to single-out fake news in Twitter. Not only that, but our finding related to the number of URLs is in line with intuitions behind the incentives to create fake news commonly found in the literature BIBREF9 (in particular that of obtaining revenue through click-through advertising). Finally, it is interesting to notice that the content of viral fake news was highly polarized. This finding is also in line with those of Alcott et al. BIBREF9 . This feature suggests that textual sentiment analysis of the content of tweets (as most researchers do), together with the above mentioned parameters from meta-data, may prove useful for identifying fake news. Conclusions With the election of Donald Trump as President of the United States, the concept of fake news has become a broadly-known phenomenon that is getting tremendous attention from governments and media companies. We have presented a preliminary study on the meta-data of a publicly available dataset of tweets that became viral during the day of the 2016 US presidential election. Our aim is to advance the understanding of which features might be characteristic of viral tweets containing fake news in comparison with viral tweets without fake news. We believe that the only way to automatically identify those deceitful tweets (i.e. containing fake news) is by actually understanding and modelling them. Only then, the automation of the processes of tagging and blocking these tweets can be successfully performed. In the same way that spam was fought, we anticipate fake news will suffer a similar evolution, with social platforms implementing tools to deal with them. With most works so far focusing on the actual content of the tweets, ours is a novel attempt from a different, but also complementary, angle. Within the used dataset, we found there are differences around exposure, characteristics of accounts spreading fake news and the tone of the content. Those findings suggest that it is indeed possible to model and automatically detect fake news. We plan to replicate and validate our experiments in an extended sample of tweets (until 4 months after the US election), and tests the predictive power of the features we found relevant within our sample. Author Disclosure Statement No competing financial interest exist. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What is their definition of tweets going viral? Only give me the answer and do not output any other words.
Answer:
[ "Viral tweets are the ones that are retweeted more than 1000 times", "those that contain a high number of retweets" ]
276
3,837
single_doc_qa
d135191a848646b583fcfc84091f5554
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction There have been many implementations of the word2vec model in either of the two architectures it provides: continuous skipgram and CBoW (BIBREF0). Similar distributed models of word or subword embeddings (or vector representations) find usage in sota, deep neural networks like BERT and its successors (BIBREF1, BIBREF2, BIBREF3). These deep networks generate contextual representations of words after been trained for extended periods on large corpora, unsupervised, using the attention mechanisms (BIBREF4). It has been observed that various hyper-parameter combinations have been used in different research involving word2vec with the possibility of many of them being sub-optimal (BIBREF5, BIBREF6, BIBREF7). Therefore, the authors seek to address the research question: what is the optimal combination of word2vec hyper-parameters for intrinsic and extrinsic NLP purposes? There are astronomically high numbers of combinations of hyper-parameters possible for neural networks, even with just a few layers. Hence, the scope of our extensive work over three corpora is on dimension size, training epochs, window size and vocabulary size for the training algorithms (hierarchical softmax and negative sampling) of both skipgram and CBoW. The corpora used for word embeddings are English Wiki News Abstract by BIBREF8 of about 15MB, English Wiki Simple (SW) Articles by BIBREF9 of about 711MB and the Billion Word (BW) of 3.9GB by BIBREF10. The corpus used for sentiment analysis is the IMDb dataset of movie reviews by BIBREF11 while that for NER is Groningen Meaning Bank (GMB) by BIBREF12, containing 47,959 sentence samples. The IMDb dataset used has a total of 25,000 sentences with half being positive sentiments and the other half being negative sentiments. The GMB dataset has 17 labels, with 9 main labels and 2 context tags. It is however unbalanced due to the high percentage of tokens with the label 'O'. This skew in the GMB dataset is typical with NER datasets. The objective of this work is to determine the optimal combinations of word2vec hyper-parameters for intrinsic evaluation (semantic and syntactic analogies) and extrinsic evaluation tasks (BIBREF13, BIBREF14), like SA and NER. It is not our objective in this work to record sota results. Some of the main contributions of this research are the empirical establishment of optimal combinations of word2vec hyper-parameters for NLP tasks, discovering the behaviour of quality of vectors viz-a-viz increasing dimensions and the confirmation of embeddings being task-specific for the downstream. The rest of this paper is organised as follows: the literature review that briefly surveys distributed representation of words, particularly word2vec; the methodology employed in this research work; the results obtained and the conclusion. Literature Review Breaking away from the non-distributed (high-dimensional, sparse) representations of words, typical of traditional bag-of-words or one-hot-encoding (BIBREF15), BIBREF0 created word2vec. Word2Vec consists of two shallow neural network architectures: continuous skipgram and CBoW. It uses distributed (low-dimensional, dense) representations of words that group similar words. This new model traded the complexity of deep neural network architectures, by other researchers, for more efficient training over large corpora. Its architectures have two training algorithms: negative sampling and hierarchical softmax (BIBREF16). The released model was trained on Google news dataset of 100 billion words. Implementations of the model have been undertaken by researchers in the programming languages Python and C++, though the original was done in C (BIBREF17). Continuous skipgram predicts (by maximizing classification of) words before and after the center word, for a given range. Since distant words are less connected to a center word in a sentence, less weight is assigned to such distant words in training. CBoW, on the other hand, uses words from the history and future in a sequence, with the objective of correctly classifying the target word in the middle. It works by projecting all history or future words within a chosen window into the same position, averaging their vectors. Hence, the order of words in the history or future does not influence the averaged vector. This is similar to the traditional bag-of-words, which is oblivious of the order of words in its sequence. A log-linear classifier is used in both architectures (BIBREF0). In further work, they extended the model to be able to do phrase representations and subsample frequent words (BIBREF16). Being a NNLM, word2vec assigns probabilities to words in a sequence, like other NNLMs such as feedforward networks or recurrent neural networks (BIBREF15). Earlier models like latent dirichlet allocation (LDA) and latent semantic analysis (LSA) exist and effectively achieve low dimensional vectors by matrix factorization (BIBREF18, BIBREF19). It's been shown that word vectors are beneficial for NLP tasks (BIBREF15), such as sentiment analysis and named entity recognition. Besides, BIBREF0 showed with vector space algebra that relationships among words can be evaluated, expressing the quality of vectors produced from the model. The famous, semantic example: vector("King") - vector("Man") + vector("Woman") $\approx $ vector("Queen") can be verified using cosine distance. Another type of semantic meaning is the relationship between a capital city and its corresponding country. Syntactic relationship examples include plural verbs and past tense, among others. Combination of both syntactic and semantic analyses is possible and provided (totaling over 19,000 questions) as Google analogy test set by BIBREF0. WordSimilarity-353 test set is another analysis tool for word vectors (BIBREF20). Unlike Google analogy score, which is based on vector space algebra, WordSimilarity is based on human expert-assigned semantic similarity on two sets of English word pairs. Both tools rank from 0 (totally dissimilar) to 1 (very much similar or exact, in Google analogy case). A typical artificial neural network (ANN) has very many hyper-parameters which may be tuned. Hyper-parameters are values which may be manually adjusted and include vector dimension size, type of algorithm and learning rate (BIBREF19). BIBREF0 tried various hyper-parameters with both architectures of their model, ranging from 50 to 1,000 dimensions, 30,000 to 3,000,000 vocabulary sizes, 1 to 3 epochs, among others. In our work, we extended research to 3,000 dimensions. Different observations were noted from the many trials. They observed diminishing returns after a certain point, despite additional dimensions or larger, unstructured training data. However, quality increased when both dimensions and data size were increased together. Although BIBREF16 pointed out that choice of optimal hyper-parameter configurations depends on the NLP problem at hand, they identified the most important factors are architecture, dimension size, subsampling rate, and the window size. In addition, it has been observed that variables like size of datasets improve the quality of word vectors and, potentially, performance on downstream tasks (BIBREF21, BIBREF0). Methodology The models were generated in a shared cluster running Ubuntu 16 with 32 CPUs of 32x Intel Xeon 4110 at 2.1GHz. Gensim (BIBREF17) python library implementation of word2vec was used with parallelization to utilize all 32 CPUs. The downstream experiments were run on a Tesla GPU on a shared DGX cluster running Ubuntu 18. Pytorch deep learning framework was used. Gensim was chosen because of its relative stability, popular support and to minimize the time required in writing and testing a new implementation in python from scratch. To form the vocabulary, words occurring less than 5 times in the corpora were dropped, stop words removed using the natural language toolkit (NLTK) (BIBREF22) and data pre-processing carried out. Table TABREF2 describes most hyper-parameters explored for each dataset. In all, 80 runs (of about 160 minutes) were conducted for the 15MB Wiki Abstract dataset with 80 serialized models totaling 15.136GB while 80 runs (for over 320 hours) were conducted for the 711MB SW dataset, with 80 serialized models totaling over 145GB. Experiments for all combinations for 300 dimensions were conducted on the 3.9GB training set of the BW corpus and additional runs for other dimensions for the window 8 + skipgram + heirarchical softmax combination to verify the trend of quality of word vectors as dimensions are increased. Google (semantic and syntactic) analogy tests and WordSimilarity-353 (with Spearman correlation) by BIBREF20 were chosen for intrinsic evaluations. They measure the quality of word vectors. The analogy scores are averages of both semantic and syntactic tests. NER and SA were chosen for extrinsic evaluations. The GMB dataset for NER was trained in an LSTM network, which had an embedding layer for input. The network diagram is shown in fig. FIGREF4. The IMDb dataset for SA was trained in a BiLSTM network, which also used an embedding layer for input. Its network diagram is given in fig. FIGREF4. It includes an additional hidden linear layer. Hyper-parameter details of the two networks for the downstream tasks are given in table TABREF3. The metrics for extrinsic evaluation include F1, precision, recall and accuracy scores. In both tasks, the default pytorch embedding was tested before being replaced by pre-trained embeddings released by BIBREF0 and ours. In each case, the dataset was shuffled before training and split in the ratio 70:15:15 for training, validation (dev) and test sets. Batch size of 64 was used. For each task, experiments for each embedding was conducted four times and an average value calculated and reported in the next section Results and Discussion Table TABREF5 summarizes key results from the intrinsic evaluations for 300 dimensions. Table TABREF6 reveals the training time (in hours) and average embedding loading time (in seconds) representative of the various models used. Tables TABREF11 and TABREF12 summarize key results for the extrinsic evaluations. Figures FIGREF7, FIGREF9, FIGREF10, FIGREF13 and FIGREF14 present line graph of the eight combinations for different dimension sizes for Simple Wiki, trend of Simple Wiki and Billion Word corpora over several dimension sizes, analogy score comparison for models across datasets, NER mean F1 scores on the GMB dataset and SA mean F1 scores on the IMDb dataset, respectively. Combination of the skipgram using hierarchical softmax and window size of 8 for 300 dimensions outperformed others in analogy scores for the Wiki Abstract. However, its results are so poor, because of the tiny file size, they're not worth reporting here. Hence, we'll focus on results from the Simple Wiki and Billion Word corpora. Best combination changes when corpus size increases, as will be noticed from table TABREF5. In terms of analogy score, for 10 epochs, w8s0h0 performs best while w8s1h0 performs best in terms of WordSim and corresponding Spearman correlation. Meanwhile, increasing the corpus size to BW, w4s1h0 performs best in terms of analogy score while w8s1h0 maintains its position as the best in terms of WordSim and Spearman correlation. Besides considering quality metrics, it can be observed from table TABREF6 that comparative ratio of values between the models is not commensurate with the results in intrinsic or extrinsic values, especially when we consider the amount of time and energy spent, since more training time results in more energy consumption (BIBREF23). Information on the length of training time for the released Mikolov model is not readily available. However, it's interesting to note that their presumed best model, which was released is also s1h0. Its analogy score, which we tested and report, is confirmed in their paper. It beats our best models in only analogy score (even for Simple Wiki), performing worse in others. This is inspite of using a much bigger corpus of 3,000,000 vocabulary size and 100 billion words while Simple Wiki had vocabulary size of 367,811 and is 711MB. It is very likely our analogy scores will improve when we use a much larger corpus, as can be observed from table TABREF5, which involves just one billion words. Although the two best combinations in analogy (w8s0h0 & w4s0h0) for SW, as shown in fig. FIGREF7, decreased only slightly compared to others with increasing dimensions, the increased training time and much larger serialized model size render any possible minimal score advantage over higher dimensions undesirable. As can be observed in fig. FIGREF9, from 100 dimensions, scores improve but start to drop after over 300 dimensions for SW and after over 400 dimensions for BW. More becomes worse! This trend is true for all combinations for all tests. Polynomial interpolation may be used to determine the optimal dimension in both corpora. Our models are available for confirmation and source codes are available on github. With regards to NER, most pretrained embeddings outperformed the default pytorch embedding, with our BW w4s1h0 model (which is best in BW analogy score) performing best in F1 score and closely followed by BIBREF0 model. On the other hand, with regards to SA, pytorch embedding outperformed the pretrained embeddings but was closely followed by our SW w8s0h0 model (which also had the best SW analogy score). BIBREF0 performed second worst of all, despite originating from a very huge corpus. The combinations w8s0h0 & w4s0h0 of SW performed reasonably well in both extrinsic tasks, just as the default pytorch embedding did. Conclusion This work analyses, empirically, optimal combinations of hyper-parameters for embeddings, specifically for word2vec. It further shows that for downstream tasks, like NER and SA, there's no silver bullet! However, some combinations show strong performance across tasks. Performance of embeddings is task-specific and high analogy scores do not necessarily correlate positively with performance on downstream tasks. This point on correlation is somewhat similar to results by BIBREF24 and BIBREF14. It was discovered that increasing dimension size depreciates performance after a point. If strong considerations of saving time, energy and the environment are made, then reasonably smaller corpora may suffice or even be better in some cases. The on-going drive by many researchers to use ever-growing data to train deep neural networks can benefit from the findings of this work. Indeed, hyper-parameter choices are very important in neural network systems (BIBREF19). Future work that may be investigated are performance of other architectures of word or sub-word embeddings, the performance and comparison of embeddings applied to languages other than English and how embeddings perform in other downstream tasks. In addition, since the actual reason for the changes in best model as corpus size increases is not clear, this will also be suitable for further research. The work on this project is partially funded by Vinnova under the project number 2019-02996 "Språkmodeller för svenska myndigheter" Acronyms Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What sentiment analysis dataset is used? Only give me the answer and do not output any other words.
Answer:
[ "IMDb dataset of movie reviews", "IMDb" ]
276
3,210
single_doc_qa
4f8414fbe992402e8acbc5b06a00986b
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Data imbalance is a common issue in a variety of NLP tasks such as tagging and machine reading comprehension. Table TABREF3 gives concrete examples: for the Named Entity Recognition (NER) task BIBREF2, BIBREF3, most tokens are backgrounds with tagging class $O$. Specifically, the number of tokens tagging class $O$ is 5 times as many as those with entity labels for the CoNLL03 dataset and 8 times for the OntoNotes5.0 dataset; Data-imbalanced issue is more severe for MRC tasks BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 with the value of negative-positive ratio being 50-200. Data imbalance results in the following two issues: (1) the training-test discrepancy: Without balancing the labels, the learning process tends to converge to a point that strongly biases towards class with the majority label. This actually creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function while at test time, F1 score concerns more about positive examples; (2) the overwhelming effect of easy-negative examples. As pointed out by meng2019dsreg, significantly large number of negative examples also means that the number of easy-negative example is large. The huge number of easy examples tends to overwhelm the training, making the model not sufficiently learned to distinguish between positive examples and hard-negative examples. The cross-entropy objective (CE for short) or maximum likelihood (MLE) objective, which is widely adopted as the training objective for data-imbalanced NLP tasks BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, handles neither of the issues. To handle the first issue, we propose to replace CE or MLE with losses based on the Sørensen–Dice coefficient BIBREF0 or Tversky index BIBREF1. The Sørensen–Dice coefficient, dice loss for short, is the harmonic mean of precision and recall. It attaches equal importance to false positives (FPs) and false negatives (FNs) and is thus more immune to data-imbalanced datasets. Tversky index extends dice loss by using a weight that trades precision and recall, which can be thought as the approximation of the $F_{\beta }$ score, and thus comes with more flexibility. Therefore, We use dice loss or Tversky index to replace CE loss to address the first issue. Only using dice loss or Tversky index is not enough since they are unable to address the dominating influence of easy-negative examples. This is intrinsically because dice loss is actually a hard version of the F1 score. Taking the binary classification task as an example, at test time, an example will be classified as negative as long as its probability is smaller than 0.5, but training will push the value to 0 as much as possible. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones. Inspired by the idea of focal loss BIBREF16 in computer vision, we propose a dynamic weight adjusting strategy, which associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds. This strategy helps to deemphasize confident examples during training as their $p$ approaches the value of 1, makes the model attentive to hard-negative examples, and thus alleviates the dominating effect of easy-negative examples. Combing both strategies, we observe significant performance boosts on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve SOTA results on CTB5 (97.92, +1.86), CTB6 (96.57, +1.80) and UD1.4 (96.98, +2.19) for the POS task; SOTA results on CoNLL03 (93.33, +0.29), OntoNotes5.0 (92.07, +0.96)), MSRA 96.72(+0.97) and OntoNotes4.0 (84.47,+2.36) for the NER task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification. The rest of this paper is organized as follows: related work is presented in Section 2. We describe different training objectives in Section 3. Experimental results are presented in Section 4. We perform ablation studies in Section 5, followed by a brief conclusion in Section 6. Related Work ::: Data Resample The idea of weighting training examples has a long history. Importance sampling BIBREF17 assigns weights to different samples and changes the data distribution. Boosting algorithms such as AdaBoost BIBREF18 select harder examples to train subsequent classifiers. Similarly, hard example mining BIBREF19 downsamples the majority class and exploits the most difficult examples. Oversampling BIBREF20, BIBREF21 is used to balance the data distribution. Another line of data resampling is to dynamically control the weights of examples as training proceeds. For example, focal loss BIBREF16 used a soft weighting scheme that emphasizes harder examples during training. In self-paced learning BIBREF22, example weights are obtained through optimizing the weighted training loss which encourages learning easier examples first. At each training step, self-paced learning algorithm optimizes model parameters and example weights jointly. Other works BIBREF23, BIBREF24 adjusted the weights of different training examples based on training loss. Besides, recent work BIBREF25, BIBREF26 proposed to learn a separate network to predict sample weights. Related Work ::: Data Imbalance Issue in Object Detection The background-object label imbalance issue is severe and thus well studied in the field of object detection BIBREF27, BIBREF28, BIBREF29, BIBREF30, BIBREF31. The idea of hard negative mining (HNM) BIBREF30 has gained much attention recently. shrivastava2016ohem proposed the online hard example mining (OHEM) algorithm in an iterative manner that makes training progressively more difficult, and pushes the model to learn better. ssd2016liu sorted all of the negative samples based on the confidence loss and picking the training examples with the negative-positive ratio at 3:1. pang2019rcnn proposed a novel method called IoU-balanced sampling and aploss2019chen designed a ranking model to replace the conventional classification task with a average-precision loss to alleviate the class imbalance issue. The efforts made on object detection have greatly inspired us to solve the data imbalance issue in NLP. Losses ::: Notation For illustration purposes, we use the binary classification task to demonstrate how different losses work. The mechanism can be easily extended to multi-class classification. Let $\lbrace x_i\rbrace $ denote a set of instances. Each $x_i$ is associated with a golden label vector $y_i = [y_{i0},y_{i1} ]$, where $y_{i1}\in \lbrace 0,1\rbrace $ and $y_{i0}\in \lbrace 0,1\rbrace $ respectively denote the positive and negative classes, and thus $y_i$ can be either $[0,1]$ or $[0,1]$. Let $p_i = [p_{i0},p_{i1} ]$ denote the probability vector, and $p_{i1}$ and $p_{i0}$ respectively denote the probability that a model assigns the positive and negative label to $x_i$. Losses ::: Cross Entropy Loss The vanilla cross entropy (CE) loss is given by: As can be seen from Eq.DISPLAY_FORM8, each $x_i$ contributes equally to the final objective. Two strategies are normally used to address the the case where we wish that not all $x_i$ are treated equal: associating different classes with different weighting factor $\alpha $ or resampling the datasets. For the former, Eq.DISPLAY_FORM8 is adjusted as follows: where $\alpha _i\in [0,1]$ may be set by the inverse class frequency or treated as a hyperparameter to set by cross validation. In this work, we use $\lg (\frac{n-n_t}{n_t}+K)$ to calculate the coefficient $\alpha $, where $n_t$ is the number of samples with class $t$ and $n$ is the total number of samples in the training set. $K$ is a hyperparameter to tune. The data resampling strategy constructs a new dataset by sampling training examples from the original dataset based on human-designed criteria, e.g., extract equal training samples from each class. Both strategies are equivalent to changing the data distribution and thus are of the same nature. Empirically, these two methods are not widely used due to the trickiness of selecting $\alpha $ especially for multi-class classification tasks and that inappropriate selection can easily bias towards rare classes BIBREF32. Losses ::: Dice coefficient and Tversky index Sørensen–Dice coefficient BIBREF0, BIBREF33, dice coefficient (DSC) for short, is a F1-oriented statistic used to gauge the similarity of two sets. Given two sets $A$ and $B$, the dice coefficient between them is given as follows: In our case, $A$ is the set that contains of all positive examples predicted by a specific model, and $B$ is the set of all golden positive examples in the dataset. When applied to boolean data with the definition of true positive (TP), false positive (FP), and false negative (FN), it can be then written as follows: For an individual example $x_i$, its corresponding DSC loss is given as follows: As can be seen, for a negative example with $y_{i1}=0$, it does not contribute to the objective. For smoothing purposes, it is common to add a $\gamma $ factor to both the nominator and the denominator, making the form to be as follows: As can be seen, negative examples, with $y_{i1}$ being 0 and DSC being $\frac{\gamma }{ p_{i1}+\gamma }$, also contribute to the training. Additionally, milletari2016v proposed to change the denominator to the square form for faster convergence, which leads to the following dice loss (DL): Another version of DL is to directly compute set-level dice coefficient instead of the sum of individual dice coefficient. We choose the latter due to ease of optimization. Tversky index (TI), which can be thought as the approximation of the $F_{\beta }$ score, extends dice coefficient to a more general case. Given two sets $A$ and $B$, tversky index is computed as follows: Tversky index offers the flexibility in controlling the tradeoff between false-negatives and false-positives. It degenerates to DSC if $\alpha =\beta =0.5$. The Tversky loss (TL) for the training set $\lbrace x_i,y_i\rbrace $ is thus as follows: Losses ::: Self-adusting Dice Loss Consider a simple case where the dataset consists of only one example $x_i$, which is classified as positive as long as $p_{i1}$ is larger than 0.5. The computation of $F1$ score is actually as follows: Comparing Eq.DISPLAY_FORM14 with Eq.DISPLAY_FORM22, we can see that Eq.DISPLAY_FORM14 is actually a soft form of $F1$, using a continuous $p$ rather than the binary $\mathbb {I}( p_{i1}>0.5)$. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones, which has a huge negative effect on the final F1 performance. To address this issue, we propose to multiply the soft probability $p$ with a decaying factor $(1-p)$, changing Eq.DISPLAY_FORM22 to the following form: One can think $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives gives an explanation from the perspective in derivative: the derivative of $\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified. A close look at Eq.DISPLAY_FORM14 reveals that it actually mimics the idea of focal loss (FL for short) BIBREF16 for object detection in vision. Focal loss was proposed for one-stage object detector to handle foreground-background tradeoff encountered during training. It down-weights the loss assigned to well-classified examples by adding a $(1-p)^{\beta }$ factor, leading the final loss to be $(1-p)^{\beta }\log p$. In Table TABREF18, we show the losses used in our experiments, which is described in the next section. Experiments We evaluate the proposed method on four NLP tasks: part-of-speech tagging, named entity recognition, machine reading comprehension and paraphrase identification. Baselines in our experiments are optimized by using the standard cross-entropy training objective. Experiments ::: Part-of-Speech Tagging Part-of-speech tagging (POS) is the task of assigning a label (e.g., noun, verb, adjective) to each word in a given text. In this paper, we choose BERT as the backbone and conduct experiments on three Chinese POS datasets. We report the span-level micro-averaged precision, recall and F1 for evaluation. Hyperparameters are tuned on the corresponding development set of each dataset. Experiments ::: Part-of-Speech Tagging ::: Datasets We conduct experiments on the widely used Chinese Treebank 5.0, 6.0 as well as UD1.4. CTB5 is a Chinese dataset for tagging and parsing, which contains 507,222 words, 824,983 characters and 18,782 sentences extracted from newswire sources. CTB6 is an extension of CTB5, containing 781,351 words, 1,285,149 characters and 28,295 sentences. UD is the abbreviation of Universal Dependencies, which is a framework for consistent annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages. In this work, we use UD1.4 for Chinese POS tagging. Experiments ::: Part-of-Speech Tagging ::: Baselines We use the following baselines: Joint-POS: shao2017character jointly learns Chinese word segmentation and POS. Lattice-LSTM: lattice2018zhang constructs a word-character lattice. Bert-Tagger: devlin2018bert treats part-of-speech as a tagging task. Experiments ::: Part-of-Speech Tagging ::: Results Table presents the experimental results on the POS task. As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4. As far as we are concerned, we are achieving SOTA performances on the three datasets. Weighted cross entropy and focal loss only gain a little performance improvement on CTB5 and CTB6, and the dice loss obtains huge gain on CTB5 but not on CTB6, which indicates the three losses are not consistently robust in resolving the data imbalance issue. The proposed DSC loss performs robustly on all the three datasets. Experiments ::: Named Entity Recognition Named entity recognition (NER) refers to the task of detecting the span and semantic category of entities from a chunk of text. Our implementation uses the current state-of-the-art BERT-MRC model proposed by xiaoya2019ner as a backbone. For English datasets, we use BERT$_\text{Large}$ English checkpoints, while for Chinese we use the official Chinese checkpoints. We report span-level micro-averaged precision, recall and F1-score. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Named Entity Recognition ::: Datasets For the NER task, we consider both Chinese datasets, i.e., OntoNotes4.0 BIBREF34 and MSRA BIBREF35, and English datasets, i.e., CoNLL2003 BIBREF36 and OntoNotes5.0 BIBREF37. CoNLL2003 is an English dataset with 4 entity types: Location, Organization, Person and Miscellaneous. We followed data processing protocols in BIBREF14. English OntoNotes5.0 consists of texts from a wide variety of sources and contains 18 entity types. We use the standard train/dev/test split of CoNLL2012 shared task. Chinese MSRA performs as a Chinese benchmark dataset containing 3 entity types. Data in MSRA is collected from news domain. Since the development set is not provided in the original MSRA dataset, we randomly split the training set into training and development splits by 9:1. We use the official test set for evaluation. Chinese OntoNotes4.0 is a Chinese dataset and consists of texts from news domain, which has 18 entity types. In this paper, we take the same data split as wu2019glyce did. Experiments ::: Named Entity Recognition ::: Baselines We use the following baselines: ELMo: a tagging model from peters2018deep. Lattice-LSTM: lattice2018zhang constructs a word-character lattice, only used in Chinese datasets. CVT: from kevin2018cross, which uses Cross-View Training(CVT) to improve the representations of a Bi-LSTM encoder. Bert-Tagger: devlin2018bert treats NER as a tagging task. Glyce-BERT: wu2019glyce combines glyph information with BERT pretraining. BERT-MRC: The current SOTA model for both Chinese and English NER datasets proposed by xiaoya2019ner, which formulate NER as machine reading comprehension task. Experiments ::: Named Entity Recognition ::: Results Table shows experimental results on NER datasets. For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively. We observe huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively. As far as we are concerned, we are setting new SOTA performances on all of the four NER datasets. Experiments ::: Machine Reading Comprehension Machine reading comprehension (MRC) BIBREF39, BIBREF40, BIBREF41, BIBREF40, BIBREF42, BIBREF15 has become a central task in natural language understanding. MRC in the SQuAD-style is to predict the answer span in the passage given a question and the passage. In this paper, we choose the SQuAD-style MRC task and report Extract Match (EM) in addition to F1 score on validation set. All hyperparameters are tuned on the development set of each dataset. Experiments ::: Machine Reading Comprehension ::: Datasets The following five datasets are used for MRC task: SQuAD v1.1, SQuAD v2.0 BIBREF4, BIBREF6 and Quoref BIBREF8. SQuAD v1.1 and SQuAD v2.0 are the most widely used QA benchmarks. SQuAD1.1 is a collection of 100K crowdsourced question-answer pairs, and SQuAD2.0 extends SQuAD1.1 allowing no short answer exists in the provided passage. Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems, containing 24K questions over 4.7K paragraphs from Wikipedia. Experiments ::: Machine Reading Comprehension ::: Baselines We use the following baselines: QANet: qanet2018 builds a model based on convolutions and self-attention. Convolution to model local interactions and self-attention to model global interactions. BERT: devlin2018bert treats NER as a tagging task. XLNet: xlnet2019 proposes a generalized autoregressive pretraining method that enables learning bidirectional contexts. Experiments ::: Machine Reading Comprehension ::: Results Table shows the experimental results for MRC tasks. With either BERT or XLNet, our proposed DSC loss obtains significant performance boost on both EM and F1. For SQuADv1.1, our proposed method outperforms XLNet by +1.25 in terms of F1 score and +0.84 in terms of EM and achieves 87.65 on EM and 89.51 on F1 for SQuAD v2.0. Moreover, on QuoRef, the proposed method surpasses XLNet results by +1.46 on EM and +1.41 on F1. Another observation is that, XLNet outperforms BERT by a huge margin, and the proposed DSC loss can obtain further performance improvement by an average score above 1.0 in terms of both EM and F1, which indicates the DSC loss is complementary to the model structures. Experiments ::: Paraphrase Identification Paraphrases are textual expressions that have the same semantic meaning using different surface words. Paraphrase identification (PI) is the task of identifying whether two sentences have the same meaning or not. We use BERT BIBREF11 and XLNet BIBREF43 as backbones and report F1 score for comparison. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Paraphrase Identification ::: Datasets We conduct experiments on two widely used datasets for PI task: MRPC BIBREF44 and QQP. MRPC is a corpus of sentence pairs automatically extracted from online news sources, with human annotations of whether the sentence pairs are semantically equivalent. The MRPC dataset has imbalanced classes (68% positive, 32% for negative). QQP is a collection of question pairs from the community question-answering website Quora. The class distribution in QQP is also unbalanced (37% positive, 63% negative). Experiments ::: Paraphrase Identification ::: Results Table shows the results for PI task. We find that replacing the training objective with DSC introduces performance boost for both BERT and XLNet. Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP. Ablation Studies ::: The Effect of Dice Loss on Accuracy-oriented Tasks We argue that the most commonly used cross-entropy objective is actually accuracy-oriented, whereas the proposed dice loss (DL) performs as a hard version of F1-score. To explore the effect of the dice loss on accuracy-oriented tasks such as text classification, we conduct experiments on the Stanford Sentiment Treebank sentiment classification datasets including SST-2 and SST-5. We fine-tune BERT$_\text{Large}$ with different training objectives. Experiment results for SST are shown in . For SST-5, BERT with CE achieves 55.57 in terms of accuracy, with DL and DSC losses slightly degrade the accuracy performance and achieve 54.63 and 55.19, respectively. For SST-2, BERT with CE achieves 94.9 in terms of accuracy. The same as SST-5, we observe a slight performance drop with DL and DSC, which means that the dice loss actually works well for F1 but not for accuracy. Ablation Studies ::: The Effect of Hyperparameters in Tversky index As mentioned in Section SECREF10, Tversky index (TI) offers the flexibility in controlling the tradeoff between false-negatives and false-positives. In this subsection, we explore the effect of hyperparameters (i.e., $\alpha $ and $\beta $) in TI to test how they manipulate the tradeoff. We conduct experiments on the Chinese OntoNotes4.0 NER dataset and English QuoRef MRC dataset to examine the influence of tradeoff between precision and recall. Experiment results are shown in Table . The highest F1 for Chinese OntoNotes4.0 is 84.67 when $\alpha $ is set to 0.6 while for QuoRef, the highest F1 is 68.44 when $\alpha $ is set to 0.4. In addition, we can observe that the performance varies a lot as $\alpha $ changes in distinct datasets, which shows that the hyperparameters $\alpha ,\beta $ play an important role in the proposed method. Conclusion In this paper, we alleviate the severe data imbalance issue in NLP tasks. We propose to use dice loss in replacement of the standard cross-entropy loss, which performs as a soft version of F1 score. Using dice loss can help narrow the gap between training objectives and evaluation metrics. Empirically, we show that the proposed training objective leads to significant performance boost for part-of-speech, named entity recognition, machine reading comprehension and paraphrase identification tasks. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: How are weights dynamically adjusted? Only give me the answer and do not output any other words.
Answer:
[ "One can think $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives gives an explanation from the perspective in derivative: the derivative of $\\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified.", "associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds" ]
276
5,435
single_doc_qa
2b30943f644c4dbdb0a9394f740a8e4a
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The concept of message passing over graphs has been around for many years BIBREF0, BIBREF1, as well as that of graph neural networks (GNNs) BIBREF2, BIBREF3. However, GNNs have only recently started to be closely investigated, following the advent of deep learning. Some notable examples include BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10, BIBREF11, BIBREF12. These approaches are known as spectral. Their similarity with message passing (MP) was observed by BIBREF9 and formalized by BIBREF13 and BIBREF14. The MP framework is based on the core idea of recursive neighborhood aggregation. That is, at every iteration, the representation of each vertex is updated based on messages received from its neighbors. All spectral GNNs can be described in terms of the MP framework. GNNs have been applied with great success to bioinformatics and social network data, for node classification, link prediction, and graph classification. However, a few studies only have focused on the application of the MP framework to representation learning on text. This paper proposes one such application. More precisely, we represent documents as word co-occurrence networks, and develop an expressive MP GNN tailored to document understanding, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Evaluation on 10 document classification datasets show that our architectures learn representations that are competitive with the state-of-the-art. Furthermore, ablation experiments shed light on the impact of various architectural choices. In what follows, we first provide some background about the MP framework (in sec. SECREF2), thoroughly describe and explain MPAD (sec. SECREF3), present our experimental framework (sec. SECREF4), report and interpret our results (sec. SECREF5), and provide a review of the relevant literature (sec. SECREF6). Message Passing Neural Networks BIBREF13 proposed a MP framework under which many of the recently introduced GNNs can be reformulated. MP consists in an aggregation phase followed by a combination phase BIBREF14. More precisely, let $G(V,E)$ be a graph, and let us consider $v \in V$. At time $t+1$, a message vector $\mathbf {m}_v^{t+1}$ is computed from the representations of the neighbors $\mathcal {N}(v)$ of $v$: The new representation $\mathbf {h}^{t+1}_v$ of $v$ is then computed by combining its current feature vector $\mathbf {h}^{t}_v$ with the message vector $\mathbf {m}_v^{t+1}$: Messages are passed for $T$ time steps. Each step is implemented by a different layer of the MP network. Hence, iterations correspond to network depth. The final feature vector $\mathbf {h}_v^T$ of $v$ is based on messages propagated from all the nodes in the subtree of height $T$ rooted at $v$. It captures both the topology of the neighborhood of $v$ and the distribution of the vertex representations in it. If a graph-level feature vector is needed, e.g., for classification or regression, a READOUT pooling function, that must be invariant to permutations, is applied: Next, we present the MP network we developed for document understanding. Message Passing Attention network for Document understanding (MPAD) ::: Word co-occurrence networks We represent a document as a statistical word co-occurrence network BIBREF18, BIBREF19 with a sliding window of size 2 overspanning sentences. Let us denote that graph $G(V,E)$. Each unique word in the preprocessed document is represented by a node in $G$, and an edge is added between two nodes if they are found together in at least one instantiation of the window. $G$ is directed and weighted: edge directions and weights respectively capture text flow and co-occurrence counts. $G$ is a compact representation of its document. In $G$, immediate neighbors are consecutive words in the same sentence. That is, paths of length 2 correspond to bigrams. Paths of length more than 2 can correspond either to traditional $n$-grams or to relaxed $n$-grams, that is, words that never appear in the same sentence but co-occur with the same word(s). Such nodes are linked through common neighbors. Master node. Inspired by BIBREF3, our $G$ also includes a special document node, linked to all other nodes via unit weight bi-directional edges. In what follows, let us denote by $n$ the number of nodes in $G$, including the master node. Message Passing Attention network for Document understanding (MPAD) ::: Message passing We formulate our AGGREGATE function as: where $\mathbf {H}^t \in \mathbb {R}^{n \times d}$ contains node features ($d$ is a hyperparameter), and $\mathbf {A} \in \mathbb {R}^{n \times n}$ is the adjacency matrix of $G$. Since $G$ is directed, $\mathbf {A}$ is asymmetric. Also, $\mathbf {A}$ has zero diagonal as we choose not to consider the feature of the node itself, only that of its incoming neighbors, when updating its representation. Since $G$ is weighted, the $i^{th}$ row of $A$ contains the weights of the edges incoming on node $v_i$. $\mathbf {D} \in \mathbb {R}^{n \times n}$ is the diagonal in-degree matrix of $G$. MLP denotes a multi-layer perceptron, and $\mathbf {M}^{t+1} \in \mathbb {R}^{n \times d}$ is the message matrix. The use of a MLP was motivated by the observation that for graph classification, MP neural nets with 1-layer perceptrons are inferior to their MLP counterparts BIBREF14. Indeed, 1-layer perceptrons are not universal approximators of multiset functions. Note that like in BIBREF14, we use a different MLP at each layer. Renormalization. The rows of $\mathbf {D}^{-1}\mathbf {A}$ sum to 1. This is equivalent to the renormalization trick of BIBREF9, but using only the in-degrees. That is, instead of computing a weighted sum of the incoming neighbors' feature vectors, we compute a weighted average of them. The coefficients are proportional to the strength of co-occurrence between words. One should note that by averaging, we lose the ability to distinguish between different neighborhood structures in some special cases, that is, we lose injectivity. Such cases include neighborhoods in which all nodes have the same representations, and neighborhoods of different sizes containing various representations in equal proportions BIBREF14. As suggested by the results of an ablation experiment, averaging is better than summing in our application (see subsection SECREF30). Note that instead of simply summing/averaging, we also tried using GAT-like attention BIBREF11 in early experiments, without obtaining better results. As far as our COMBINE function, we use the Gated Recurrent Unit BIBREF20, BIBREF21: Omitting biases for readability, we have: where the $\mathbf {W}$ and $\mathbf {U}$ matrices are trainable weight matrices not shared across time steps, $\sigma (\mathbf {x}) = 1/(1+\exp (-\mathbf {x}))$ is the sigmoid function, and $\mathbf {R}$ and $\mathbf {Z}$ are the parameters of the reset and update gates. The reset gate controls the amount of information from the previous time step (in $\mathbf {H}^t$) that should propagate to the candidate representations, $\tilde{\mathbf {H}}^{t+1}$. The new representations $\mathbf {H}^{t+1}$ are finally obtained by linearly interpolating between the previous and the candidate ones, using the coefficients returned by the update gate. Interpretation. Updating node representations through a GRU should in principle allow nodes to encode a combination of local and global signals (low and high values of $t$, resp.), by allowing them to remember about past iterations. In addition, we also explicitly consider node representations at all iterations when reading out (see Eq. DISPLAY_FORM18). Message Passing Attention network for Document understanding (MPAD) ::: Readout After passing messages and performing updates for $T$ iterations, we obtain a matrix $\mathbf {H}^T \in \mathbb {R}^{n \times d}$ containing the final vertex representations. Let $\hat{G}$ be graph $G$ without the special document node, and matrix $\mathbf {\hat{H}}^T \in \mathbb {R}^{(n-1) \times d}$ be the corresponding representation matrix (i.e., $\mathbf {H}^T$ without the row of the document node). We use as our READOUT function the concatenation of self-attention applied to $\mathbf {\hat{H}}^T$ with the final document node representation. More precisely, we apply a global self-attention mechanism BIBREF22 to the rows of $\mathbf {\hat{H}}^T$. As shown in Eq. DISPLAY_FORM17, $\mathbf {\hat{H}}^T$ is first passed to a dense layer parameterized by matrix $\mathbf {W}_A^T \in \mathbb {R}^{d \times d}$. An alignment vector $\mathbf {a}$ is then derived by comparing, via dot products, the rows of the output of the dense layer $\mathbf {Y}^T \in \mathbb {R}^{(n-1) \times d}$ with a trainable vector $\mathbf {v}^T \in \mathbb {R}^d$ (initialized randomly) and normalizing with a softmax. The normalized alignment coefficients are finally used to compute the attentional vector $\mathbf {u}^T \in \mathbb {R}^d$ as a weighted sum of the final representations $\mathbf {\hat{H}}^T$. Note that we tried with multiple context vectors, i.e., with a matrix $\mathbf {V}^T$ instead of a vector $\mathbf {v}^T$, like in BIBREF22, but results were not convincing, even when adding a regularization term to the loss to favor diversity among the rows of $\mathbf {V}^T$. Master node skip connection. $\mathbf {h}_G^T \in \mathbb {R}^{2d}$ is obtained by concatenating $\mathbf {u}^T$ and the final master node representation. That is, the master node vector bypasses the attention mechanism. This is equivalent to a skip or shortcut connection BIBREF23. The reason behind this choice is that we expect the special document node to learn a high-level summary about the document, such as its size, vocabulary, etc. (more details are given in subsection SECREF30). Therefore, by making the master node bypass the attention layer, we directly inject global information about the document into its final representation. Multi-readout. BIBREF14, inspired by Jumping Knowledge Networks BIBREF12, recommend to not only use the final representations when performing readout, but also that of the earlier steps. Indeed, as one iterates, node features capture more and more global information. However, retaining more local, intermediary information might be useful too. Thus, instead of applying the readout function only to $t=T$, we apply it to all time steps and concatenate the results, finally obtaining $\mathbf {h}_G \in \mathbb {R}^{T \times 2d}$ : In effect, with this modification, we take into account features based on information aggregated from subtrees of different heights (from 1 to $T$), corresponding to local and global features. Message Passing Attention network for Document understanding (MPAD) ::: Hierarchical variants of MPAD Through the successive MP iterations, it could be argued that MPAD implicitly captures some soft notion of the hierarchical structure of documents (words $\rightarrow $ bigrams $\rightarrow $ compositions of bigrams, etc.). However, it might be beneficial to explicitly capture document hierarchy. Hierarchical architectures have brought significant improvements to many NLP tasks, such as language modeling and generation BIBREF24, BIBREF25, sentiment and topic classification BIBREF26, BIBREF27, and spoken language understanding BIBREF28, BIBREF29. Inspired by this line of research, we propose several hierarchical variants of MPAD, detailed in what follows. In all of them, we represent each sentence in the document as a word co-occurrence network, and obtain an embedding for it by applying MPAD as previously described. MPAD-sentence-att. Here, the sentence embeddings are simply combined through self-attention. MPAD-clique. In this variant, we build a complete graph where each node represents a sentence. We then feed that graph to MPAD, where the feature vectors of the nodes are initialized with the sentence embeddings previously obtained. MPAD-path. This variant is similar to the clique one, except that instead of a complete graph, we build a path according to the natural flow of the text. That is, two nodes are linked by a directed edge if the two sentences they represent follow each other in the document. Experiments ::: Datasets We evaluate the quality of the document embeddings learned by MPAD on 10 document classification datasets, covering the topic identification, coarse and fine sentiment analysis and opinion mining, and subjectivity detection tasks. We briefly introduce the datasets next. Their statistics are reported in Table TABREF21. (1) Reuters. This dataset contains stories collected from the Reuters news agency in 1987. Following common practice, we used the ModApte split and considered only the 10 classes with the highest number of positive training examples. We also removed documents belonging to more than one class and then classes left with no document (2 classes). (2) BBCSport BIBREF30 contains documents from the BBC Sport website corresponding to 2004-2005 sports news articles. (3) Polarity BIBREF31 features positive and negative labeled snippets from Rotten Tomatoes. (4) Subjectivity BIBREF32 contains movie review snippets from Rotten Tomatoes (subjective sentences), and Internet Movie Database plot summaries (objective sentences). (5) MPQA BIBREF33 is made of positive and negative phrases, annotated as part of the summer 2002 NRRC Workshop on Multi-Perspective Question Answering. (6) IMDB BIBREF34 is a collection of highly polarized movie reviews from IMDB (positive and negative). There are at most 30 reviews for each movie. (7) TREC BIBREF35 consists of questions that are classified into 6 different categories. (8) SST-1 BIBREF36 contains the same snippets as Polarity. The authors used the Stanford Parser to parse the snippets and split them into multiple sentences. They then used Amazon Mechanical Turk to annotate the resulting phrases according to their polarity (very negative, negative, neutral, positive, very positive). (9) SST-2 BIBREF36 is the same as SST-1 but with neutral reviews removed and snippets classified as positive or negative. (10) Yelp2013 BIBREF26 features reviews obtained from the 2013 Yelp Dataset Challenge. Experiments ::: Baselines We evaluate MPAD against multiple state-of-the-art baseline models, including hierarchical ones, to enable fair comparison with the hierarchical MPAD variants. doc2vec BIBREF37. Doc2vec (or paragraph vector) is an extension of word2vec that learns vectors for documents in a fully unsupervised manner. Document embeddings are then fed to a logistic regression classifier. CNN BIBREF38. The convolutional neural network architecture, well-known in computer vision, is applied to text. There is one spatial dimension and the word embeddings are used as channels (depth dimensions). DAN BIBREF39. The Deep Averaging Network passes the unweighted average of the embeddings of the input words through multiple dense layers and a final softmax. Tree-LSTM BIBREF40 is a generalization of the standard LSTM architecture to constituency and dependency parse trees. DRNN BIBREF41. Recursive neural networks are stacked and applied to parse trees. LSTMN BIBREF42 is an extension of the LSTM model where the memory cell is replaced by a memory network which stores word representations. C-LSTM BIBREF43 combines convolutional and recurrent neural networks. The region embeddings provided by a CNN are fed to a LSTM. SPGK BIBREF44 also models documents as word co-occurrence networks. It computes a graph kernel that compares shortest paths extracted from the word co-occurrence networks and then uses a SVM to categorize documents. WMD BIBREF45 is an application of the well-known Earth Mover's Distance to text. A k-nearest neighbor classifier is used. S-WMD BIBREF46 is a supervised extension of the Word Mover's Distance. Semantic-CNN BIBREF47. Here, a CNN is applied to semantic units obtained by clustering words in the embedding space. LSTM-GRNN BIBREF26 is a hierarchical model where sentence embeddings are obtained with a CNN and a GRU-RNN is fed the sentence representations to obtain a document vector. HN-ATT BIBREF27 is another hierarchical model, where the same encoder architecture (a bidirectional GRU-RNN) is used for both sentences and documents, with different parameters. A self-attention mechanism is applied to the RNN annotations at each level. Experiments ::: Model configuration and training We preprocess all datasets using the code of BIBREF38. On Yelp2013, we also replace all tokens appearing strictly less than 6 times with a special UNK token, like in BIBREF27. We then build a directed word co-occurrence network from each document, with a window of size 2. We use two MP iterations ($T$=2) for the basic MPAD, and two MP iterations at each level, for the hierarchical variants. We set $d$ to 64, except on IMDB and Yelp on which $d=128$, and use a two-layer MLP. The final graph representations are passed through a softmax for classification. We train MPAD in an end-to-end fashion by minimizing the cross-entropy loss function with the Adam optimizer BIBREF48 and an initial learning rate of 0.001. To regulate potential differences in magnitude, we apply batch normalization after concatenating the feature vector of the master node with the self-attentional vector, that is, after the skip connection (see subsection SECREF16). To prevent overfitting, we use dropout BIBREF49 with a rate of 0.5. We select the best epoch, capped at 200, based on the validation accuracy. When cross-validation is used (see 3rd column of Table TABREF21), we construct a validation set by randomly sampling 10% of the training set of each fold. On all datasets except Yelp2013, we use the publicly available 300-dimensional pre-trained Google News vectors ($D$=300) BIBREF50 to initialize the node representations $\mathbf {H}^0$. On Yelp2013, we follow BIBREF27 and learn our own word vectors from the training and validation sets with the gensim implementation of word2vec BIBREF51. MPAD was implemented in Python 3.6 using the PyTorch library BIBREF52. All experiments were run on a single machine consisting of a 3.4 GHz Intel Core i7 CPU with 16 GB of RAM and an NVidia GeForce Titan Xp GPU. Results and ablations ::: Results Experimental results are shown in Table TABREF28. For the baselines, the best scores reported in each original paper are shown. MPAD reaches best performance on 7 out of 10 datasets, and is close second elsewhere. Moreover, the 7 datasets on which MPAD ranks first widely differ in training set size, number of categories, and prediction task (topic, sentiment, subjectivity), which indicates that MPAD can perform well in different settings. MPAD vs. hierarchical variants. On 9 datasets out of 10, one or more of the hierarchical variants outperform the vanilla MPAD architecture, highlighting the benefit of explicitly modeling the hierarchical nature of documents. However, on Subjectivity, standard MPAD outperforms all hierarchical variants. On TREC, it reaches the same accuracy. We hypothesize that in some cases, using a different graph to separately encode each sentence might be worse than using one single graph to directly encode the document. Indeed, in the single document graph, some words that never appear in the same sentence can be connected through common neighbors, as was explained in subsection SECREF7. So, this way, some notion of cross-sentence context is captured while learning representations of words, bigrams, etc. at each MP iteration. This creates better informed representations, resulting in a better document embedding. With the hierarchical variants, on the other hand, each sentence vector is produced in isolation, without any contextual information about the other sentences in the document. Therefore, the final sentence embeddings might be of lower quality, and as a group might also contain redundant/repeated information. When the sentence vectors are finally combined into a document representation, it is too late to take context into account. Results and ablations ::: Ablation studies To understand the impact of some hyperparameters on performance, we conducted additional experiments on the Reuters, Polarity, and IMDB datasets, with the non-hierarchical version of MPAD. Results are shown in Table TABREF29. Number of MP iterations. First, we varied the number of message passing iterations from 1 to 4. We can clearly see in Table TABREF29 that having more iterations improves performance. We attribute this to the fact that we are reading out at each iteration from 1 to $T$ (see Eq. DISPLAY_FORM18), which enables the final graph representation to encode a mixture of low-level and high-level features. Indeed, in initial experiments involving readout at $t$=$T$ only, setting $T\ge 2$ was always decreasing performance, despite the GRU-based updates (Eq. DISPLAY_FORM14). These results were consistent with that of BIBREF53 and BIBREF9, who both are reading out only at $t$=$T$ too. We hypothesize that node features at $T\ge 2$ are too diffuse to be entirely relied upon during readout. More precisely, initially at $t$=0, node representations capture information about words, at $t$=1, about their 1-hop neighborhood (bigrams), at $t$=2, about compositions of bigrams, etc. Thus, pretty quickly, node features become general and diffuse. In such cases, considering also the lower-level, more precise features of the earlier iterations when reading out may be necessary. Undirected edges. On Reuters, using an undirected graph leads to better performance, while on Polarity and IMDB, it is the opposite. This can be explained by the fact that Reuters is a topic classification task, for which the presence or absence of some patterns is important, but not necessarily the order in which they appear, while Polarity and IMDB are sentiment analysis tasks. To capture sentiment, modeling word order is crucial, e.g., in detecting negation. No master node. Removing the master node deteriorates performance across all datasets, clearly showing the value of having such a node. We hypothesize that since the special document node is connected to all other nodes, it is able to encode during message passing a summary of the document. No renormalization. Here, we do not use the renormalization trick of BIBREF9 during MP (see subsection SECREF10). That is, Eq. DISPLAY_FORM11 becomes $\mathbf {M}^{t+1} = \textsc {MLP}^{t+1}\big (\mathbf {A}\mathbf {H}^{t}\big )$. In other words, instead of computing a weighted average of the incoming neighbors' feature vectors, we compute a weighted sum of them. Unlike the mean, which captures distributions, the sum captures structural information BIBREF14. As shown in Table TABREF29, using sum instead of mean decreases performance everywhere, suggesting that in our application, capturing the distribution of neighbor representations is more important that capturing their structure. We hypothesize that this is the case because statistical word co-occurrence networks tend to have similar structural properties, regardless of the topic, polarity, sentiment, etc. of the corresponding documents. Neighbors-only. In this experiment, we replaced the GRU combine function (see Eq. DISPLAY_FORM14) with the identity function. That is, we simply have $\mathbf {H}^{t+1}$=$\mathbf {M}^{t+1}$. Since $\mathbf {A}$ has zero diagonal, by doing so, we completely ignore the previous feature of the node itself when updating its representation. That is, the update is based entirely on its neighbors. Except on Reuters (almost no change), performance always suffers, stressing the need to take into account the root node during updates, not only its neighborhood. Related work In what follows, we offer a brief review of relevant studies, ranked by increasing order of similarity with our work. BIBREF9, BIBREF54, BIBREF11, BIBREF10 conduct some node classification experiments on citation networks, where nodes are scientific papers, i.e., textual data. However, text is only used to derive node feature vectors. The external graph structure, which plays a central role in determining node labels, is completely unrelated to text. On the other hand, BIBREF55, BIBREF7 experiment on traditional document classification tasks. They both build $k$-nearest neighbor similarity graphs based on the Gaussian diffusion kernel. More precisely, BIBREF55 build one single graph where nodes are documents and distance is computed in the BoW space. Node features are then used for classification. Closer to our work, BIBREF7 represent each document as a graph. All document graphs are derived from the same underlying structure. Only node features, corresponding to the entries of the documents' BoW vectors, vary. The underlying, shared structure is that of a $k$-NN graph where nodes are vocabulary terms and similarity is the cosine of the word embedding vectors. BIBREF7 then perform graph classification. However they found performance to be lower than that of a naive Bayes classifier. BIBREF56 use a GNN for hierarchical classification into a large taxonomy of topics. This task differs from traditional document classification. The authors represent documents as unweighted, undirected word co-occurrence networks with word embeddings as node features. They then use the spatial GNN of BIBREF15 to perform graph classification. The work closest to ours is probably that of BIBREF53. The authors adopt the semi-supervised node classification approach of BIBREF9. They build one single undirected graph from the entire dataset, with both word and document nodes. Document-word edges are weighted by TF-IDF and word-word edges are weighted by pointwise mutual information derived from co-occurrence within a sliding window. There are no document-document edges. The GNN is trained based on the cross-entropy loss computed only for the labeled nodes, that is, the documents in the training set. When the final node representations are obtained, one can use that of the test documents to classify them and evaluate prediction performance. There are significant differences between BIBREF53 and our work. First, our approach is inductive, not transductive. Indeed, while the node classification approach of BIBREF53 requires all test documents at training time, our graph classification model is able to perform inference on new, never-seen documents. The downside of representing documents as separate graphs, however, is that we lose the ability to capture corpus-level dependencies. Also, our directed graphs capture word ordering, which is ignored by BIBREF53. Finally, the approach of BIBREF53 requires computing the PMI for every word pair in the vocabulary, which may be prohibitive on datasets with very large vocabularies. On the other hand, the complexity of MPAD does not depend on vocabulary size. Conclusion We have proposed an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). Experiments conducted on 10 standard text classification datasets show that our architecture is competitive with the state-of-the-art. By processing weighted, directed word co-occurrence networks, MPAD is sensitive to word order and word-word relationship strength. To explicitly capture the hierarchical structure of documents, we also propose three hierarchical variants of MPAD, that we show bring improvements over the vanilla architecture. Acknowledgments We thank the NVidia corporation for the donation of a GPU as part of their GPU grant program. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Which component is the least impactful? Only give me the answer and do not output any other words.
Answer:
[ "Based on table results provided changing directed to undirected edges had least impact - max abs difference of 0.33 points on all three datasets." ]
276
6,132
single_doc_qa
005cfff7a79348379cd33deb0586925a
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Data imbalance is a common issue in a variety of NLP tasks such as tagging and machine reading comprehension. Table TABREF3 gives concrete examples: for the Named Entity Recognition (NER) task BIBREF2, BIBREF3, most tokens are backgrounds with tagging class $O$. Specifically, the number of tokens tagging class $O$ is 5 times as many as those with entity labels for the CoNLL03 dataset and 8 times for the OntoNotes5.0 dataset; Data-imbalanced issue is more severe for MRC tasks BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8 with the value of negative-positive ratio being 50-200. Data imbalance results in the following two issues: (1) the training-test discrepancy: Without balancing the labels, the learning process tends to converge to a point that strongly biases towards class with the majority label. This actually creates a discrepancy between training and test: at training time, each training instance contributes equally to the objective function while at test time, F1 score concerns more about positive examples; (2) the overwhelming effect of easy-negative examples. As pointed out by meng2019dsreg, significantly large number of negative examples also means that the number of easy-negative example is large. The huge number of easy examples tends to overwhelm the training, making the model not sufficiently learned to distinguish between positive examples and hard-negative examples. The cross-entropy objective (CE for short) or maximum likelihood (MLE) objective, which is widely adopted as the training objective for data-imbalanced NLP tasks BIBREF9, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, handles neither of the issues. To handle the first issue, we propose to replace CE or MLE with losses based on the Sørensen–Dice coefficient BIBREF0 or Tversky index BIBREF1. The Sørensen–Dice coefficient, dice loss for short, is the harmonic mean of precision and recall. It attaches equal importance to false positives (FPs) and false negatives (FNs) and is thus more immune to data-imbalanced datasets. Tversky index extends dice loss by using a weight that trades precision and recall, which can be thought as the approximation of the $F_{\beta }$ score, and thus comes with more flexibility. Therefore, We use dice loss or Tversky index to replace CE loss to address the first issue. Only using dice loss or Tversky index is not enough since they are unable to address the dominating influence of easy-negative examples. This is intrinsically because dice loss is actually a hard version of the F1 score. Taking the binary classification task as an example, at test time, an example will be classified as negative as long as its probability is smaller than 0.5, but training will push the value to 0 as much as possible. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones. Inspired by the idea of focal loss BIBREF16 in computer vision, we propose a dynamic weight adjusting strategy, which associates each training example with a weight in proportion to $(1-p)$, and this weight dynamically changes as training proceeds. This strategy helps to deemphasize confident examples during training as their $p$ approaches the value of 1, makes the model attentive to hard-negative examples, and thus alleviates the dominating effect of easy-negative examples. Combing both strategies, we observe significant performance boosts on a wide range of data imbalanced NLP tasks. Notably, we are able to achieve SOTA results on CTB5 (97.92, +1.86), CTB6 (96.57, +1.80) and UD1.4 (96.98, +2.19) for the POS task; SOTA results on CoNLL03 (93.33, +0.29), OntoNotes5.0 (92.07, +0.96)), MSRA 96.72(+0.97) and OntoNotes4.0 (84.47,+2.36) for the NER task; along with competitive results on the tasks of machine reading comprehension and paraphrase identification. The rest of this paper is organized as follows: related work is presented in Section 2. We describe different training objectives in Section 3. Experimental results are presented in Section 4. We perform ablation studies in Section 5, followed by a brief conclusion in Section 6. Related Work ::: Data Resample The idea of weighting training examples has a long history. Importance sampling BIBREF17 assigns weights to different samples and changes the data distribution. Boosting algorithms such as AdaBoost BIBREF18 select harder examples to train subsequent classifiers. Similarly, hard example mining BIBREF19 downsamples the majority class and exploits the most difficult examples. Oversampling BIBREF20, BIBREF21 is used to balance the data distribution. Another line of data resampling is to dynamically control the weights of examples as training proceeds. For example, focal loss BIBREF16 used a soft weighting scheme that emphasizes harder examples during training. In self-paced learning BIBREF22, example weights are obtained through optimizing the weighted training loss which encourages learning easier examples first. At each training step, self-paced learning algorithm optimizes model parameters and example weights jointly. Other works BIBREF23, BIBREF24 adjusted the weights of different training examples based on training loss. Besides, recent work BIBREF25, BIBREF26 proposed to learn a separate network to predict sample weights. Related Work ::: Data Imbalance Issue in Object Detection The background-object label imbalance issue is severe and thus well studied in the field of object detection BIBREF27, BIBREF28, BIBREF29, BIBREF30, BIBREF31. The idea of hard negative mining (HNM) BIBREF30 has gained much attention recently. shrivastava2016ohem proposed the online hard example mining (OHEM) algorithm in an iterative manner that makes training progressively more difficult, and pushes the model to learn better. ssd2016liu sorted all of the negative samples based on the confidence loss and picking the training examples with the negative-positive ratio at 3:1. pang2019rcnn proposed a novel method called IoU-balanced sampling and aploss2019chen designed a ranking model to replace the conventional classification task with a average-precision loss to alleviate the class imbalance issue. The efforts made on object detection have greatly inspired us to solve the data imbalance issue in NLP. Losses ::: Notation For illustration purposes, we use the binary classification task to demonstrate how different losses work. The mechanism can be easily extended to multi-class classification. Let $\lbrace x_i\rbrace $ denote a set of instances. Each $x_i$ is associated with a golden label vector $y_i = [y_{i0},y_{i1} ]$, where $y_{i1}\in \lbrace 0,1\rbrace $ and $y_{i0}\in \lbrace 0,1\rbrace $ respectively denote the positive and negative classes, and thus $y_i$ can be either $[0,1]$ or $[0,1]$. Let $p_i = [p_{i0},p_{i1} ]$ denote the probability vector, and $p_{i1}$ and $p_{i0}$ respectively denote the probability that a model assigns the positive and negative label to $x_i$. Losses ::: Cross Entropy Loss The vanilla cross entropy (CE) loss is given by: As can be seen from Eq.DISPLAY_FORM8, each $x_i$ contributes equally to the final objective. Two strategies are normally used to address the the case where we wish that not all $x_i$ are treated equal: associating different classes with different weighting factor $\alpha $ or resampling the datasets. For the former, Eq.DISPLAY_FORM8 is adjusted as follows: where $\alpha _i\in [0,1]$ may be set by the inverse class frequency or treated as a hyperparameter to set by cross validation. In this work, we use $\lg (\frac{n-n_t}{n_t}+K)$ to calculate the coefficient $\alpha $, where $n_t$ is the number of samples with class $t$ and $n$ is the total number of samples in the training set. $K$ is a hyperparameter to tune. The data resampling strategy constructs a new dataset by sampling training examples from the original dataset based on human-designed criteria, e.g., extract equal training samples from each class. Both strategies are equivalent to changing the data distribution and thus are of the same nature. Empirically, these two methods are not widely used due to the trickiness of selecting $\alpha $ especially for multi-class classification tasks and that inappropriate selection can easily bias towards rare classes BIBREF32. Losses ::: Dice coefficient and Tversky index Sørensen–Dice coefficient BIBREF0, BIBREF33, dice coefficient (DSC) for short, is a F1-oriented statistic used to gauge the similarity of two sets. Given two sets $A$ and $B$, the dice coefficient between them is given as follows: In our case, $A$ is the set that contains of all positive examples predicted by a specific model, and $B$ is the set of all golden positive examples in the dataset. When applied to boolean data with the definition of true positive (TP), false positive (FP), and false negative (FN), it can be then written as follows: For an individual example $x_i$, its corresponding DSC loss is given as follows: As can be seen, for a negative example with $y_{i1}=0$, it does not contribute to the objective. For smoothing purposes, it is common to add a $\gamma $ factor to both the nominator and the denominator, making the form to be as follows: As can be seen, negative examples, with $y_{i1}$ being 0 and DSC being $\frac{\gamma }{ p_{i1}+\gamma }$, also contribute to the training. Additionally, milletari2016v proposed to change the denominator to the square form for faster convergence, which leads to the following dice loss (DL): Another version of DL is to directly compute set-level dice coefficient instead of the sum of individual dice coefficient. We choose the latter due to ease of optimization. Tversky index (TI), which can be thought as the approximation of the $F_{\beta }$ score, extends dice coefficient to a more general case. Given two sets $A$ and $B$, tversky index is computed as follows: Tversky index offers the flexibility in controlling the tradeoff between false-negatives and false-positives. It degenerates to DSC if $\alpha =\beta =0.5$. The Tversky loss (TL) for the training set $\lbrace x_i,y_i\rbrace $ is thus as follows: Losses ::: Self-adusting Dice Loss Consider a simple case where the dataset consists of only one example $x_i$, which is classified as positive as long as $p_{i1}$ is larger than 0.5. The computation of $F1$ score is actually as follows: Comparing Eq.DISPLAY_FORM14 with Eq.DISPLAY_FORM22, we can see that Eq.DISPLAY_FORM14 is actually a soft form of $F1$, using a continuous $p$ rather than the binary $\mathbb {I}( p_{i1}>0.5)$. This gap isn't a big issue for balanced datasets, but is extremely detrimental if a big proportion of training examples are easy-negative ones: easy-negative examples can easily dominate training since their probabilities can be pushed to 0 fairly easily. Meanwhile, the model can hardly distinguish between hard-negative examples and positive ones, which has a huge negative effect on the final F1 performance. To address this issue, we propose to multiply the soft probability $p$ with a decaying factor $(1-p)$, changing Eq.DISPLAY_FORM22 to the following form: One can think $(1-p_{i1})$ as a weight associated with each example, which changes as training proceeds. The intuition of changing $p_{i1}$ to $(1-p_{i1}) p_{i1}$ is to push down the weight of easy examples. For easy examples whose probability are approaching 0 or 1, $(1-p_{i1}) p_{i1}$ makes the model attach significantly less focus to them. Figure FIGREF23 gives gives an explanation from the perspective in derivative: the derivative of $\frac{(1-p)p}{1+(1-p)p}$ with respect to $p$ approaches 0 immediately after $p$ approaches 0, which means the model attends less to examples once they are correctly classified. A close look at Eq.DISPLAY_FORM14 reveals that it actually mimics the idea of focal loss (FL for short) BIBREF16 for object detection in vision. Focal loss was proposed for one-stage object detector to handle foreground-background tradeoff encountered during training. It down-weights the loss assigned to well-classified examples by adding a $(1-p)^{\beta }$ factor, leading the final loss to be $(1-p)^{\beta }\log p$. In Table TABREF18, we show the losses used in our experiments, which is described in the next section. Experiments We evaluate the proposed method on four NLP tasks: part-of-speech tagging, named entity recognition, machine reading comprehension and paraphrase identification. Baselines in our experiments are optimized by using the standard cross-entropy training objective. Experiments ::: Part-of-Speech Tagging Part-of-speech tagging (POS) is the task of assigning a label (e.g., noun, verb, adjective) to each word in a given text. In this paper, we choose BERT as the backbone and conduct experiments on three Chinese POS datasets. We report the span-level micro-averaged precision, recall and F1 for evaluation. Hyperparameters are tuned on the corresponding development set of each dataset. Experiments ::: Part-of-Speech Tagging ::: Datasets We conduct experiments on the widely used Chinese Treebank 5.0, 6.0 as well as UD1.4. CTB5 is a Chinese dataset for tagging and parsing, which contains 507,222 words, 824,983 characters and 18,782 sentences extracted from newswire sources. CTB6 is an extension of CTB5, containing 781,351 words, 1,285,149 characters and 28,295 sentences. UD is the abbreviation of Universal Dependencies, which is a framework for consistent annotation of grammar (parts of speech, morphological features, and syntactic dependencies) across different human languages. In this work, we use UD1.4 for Chinese POS tagging. Experiments ::: Part-of-Speech Tagging ::: Baselines We use the following baselines: Joint-POS: shao2017character jointly learns Chinese word segmentation and POS. Lattice-LSTM: lattice2018zhang constructs a word-character lattice. Bert-Tagger: devlin2018bert treats part-of-speech as a tagging task. Experiments ::: Part-of-Speech Tagging ::: Results Table presents the experimental results on the POS task. As can be seen, the proposed DSC loss outperforms the best baseline results by a large margin, i.e., outperforming BERT-tagger by +1.86 in terms of F1 score on CTB5, +1.80 on CTB6 and +2.19 on UD1.4. As far as we are concerned, we are achieving SOTA performances on the three datasets. Weighted cross entropy and focal loss only gain a little performance improvement on CTB5 and CTB6, and the dice loss obtains huge gain on CTB5 but not on CTB6, which indicates the three losses are not consistently robust in resolving the data imbalance issue. The proposed DSC loss performs robustly on all the three datasets. Experiments ::: Named Entity Recognition Named entity recognition (NER) refers to the task of detecting the span and semantic category of entities from a chunk of text. Our implementation uses the current state-of-the-art BERT-MRC model proposed by xiaoya2019ner as a backbone. For English datasets, we use BERT$_\text{Large}$ English checkpoints, while for Chinese we use the official Chinese checkpoints. We report span-level micro-averaged precision, recall and F1-score. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Named Entity Recognition ::: Datasets For the NER task, we consider both Chinese datasets, i.e., OntoNotes4.0 BIBREF34 and MSRA BIBREF35, and English datasets, i.e., CoNLL2003 BIBREF36 and OntoNotes5.0 BIBREF37. CoNLL2003 is an English dataset with 4 entity types: Location, Organization, Person and Miscellaneous. We followed data processing protocols in BIBREF14. English OntoNotes5.0 consists of texts from a wide variety of sources and contains 18 entity types. We use the standard train/dev/test split of CoNLL2012 shared task. Chinese MSRA performs as a Chinese benchmark dataset containing 3 entity types. Data in MSRA is collected from news domain. Since the development set is not provided in the original MSRA dataset, we randomly split the training set into training and development splits by 9:1. We use the official test set for evaluation. Chinese OntoNotes4.0 is a Chinese dataset and consists of texts from news domain, which has 18 entity types. In this paper, we take the same data split as wu2019glyce did. Experiments ::: Named Entity Recognition ::: Baselines We use the following baselines: ELMo: a tagging model from peters2018deep. Lattice-LSTM: lattice2018zhang constructs a word-character lattice, only used in Chinese datasets. CVT: from kevin2018cross, which uses Cross-View Training(CVT) to improve the representations of a Bi-LSTM encoder. Bert-Tagger: devlin2018bert treats NER as a tagging task. Glyce-BERT: wu2019glyce combines glyph information with BERT pretraining. BERT-MRC: The current SOTA model for both Chinese and English NER datasets proposed by xiaoya2019ner, which formulate NER as machine reading comprehension task. Experiments ::: Named Entity Recognition ::: Results Table shows experimental results on NER datasets. For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively. We observe huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively. As far as we are concerned, we are setting new SOTA performances on all of the four NER datasets. Experiments ::: Machine Reading Comprehension Machine reading comprehension (MRC) BIBREF39, BIBREF40, BIBREF41, BIBREF40, BIBREF42, BIBREF15 has become a central task in natural language understanding. MRC in the SQuAD-style is to predict the answer span in the passage given a question and the passage. In this paper, we choose the SQuAD-style MRC task and report Extract Match (EM) in addition to F1 score on validation set. All hyperparameters are tuned on the development set of each dataset. Experiments ::: Machine Reading Comprehension ::: Datasets The following five datasets are used for MRC task: SQuAD v1.1, SQuAD v2.0 BIBREF4, BIBREF6 and Quoref BIBREF8. SQuAD v1.1 and SQuAD v2.0 are the most widely used QA benchmarks. SQuAD1.1 is a collection of 100K crowdsourced question-answer pairs, and SQuAD2.0 extends SQuAD1.1 allowing no short answer exists in the provided passage. Quoref is a QA dataset which tests the coreferential reasoning capability of reading comprehension systems, containing 24K questions over 4.7K paragraphs from Wikipedia. Experiments ::: Machine Reading Comprehension ::: Baselines We use the following baselines: QANet: qanet2018 builds a model based on convolutions and self-attention. Convolution to model local interactions and self-attention to model global interactions. BERT: devlin2018bert treats NER as a tagging task. XLNet: xlnet2019 proposes a generalized autoregressive pretraining method that enables learning bidirectional contexts. Experiments ::: Machine Reading Comprehension ::: Results Table shows the experimental results for MRC tasks. With either BERT or XLNet, our proposed DSC loss obtains significant performance boost on both EM and F1. For SQuADv1.1, our proposed method outperforms XLNet by +1.25 in terms of F1 score and +0.84 in terms of EM and achieves 87.65 on EM and 89.51 on F1 for SQuAD v2.0. Moreover, on QuoRef, the proposed method surpasses XLNet results by +1.46 on EM and +1.41 on F1. Another observation is that, XLNet outperforms BERT by a huge margin, and the proposed DSC loss can obtain further performance improvement by an average score above 1.0 in terms of both EM and F1, which indicates the DSC loss is complementary to the model structures. Experiments ::: Paraphrase Identification Paraphrases are textual expressions that have the same semantic meaning using different surface words. Paraphrase identification (PI) is the task of identifying whether two sentences have the same meaning or not. We use BERT BIBREF11 and XLNet BIBREF43 as backbones and report F1 score for comparison. Hyperparameters are tuned on the development set of each dataset. Experiments ::: Paraphrase Identification ::: Datasets We conduct experiments on two widely used datasets for PI task: MRPC BIBREF44 and QQP. MRPC is a corpus of sentence pairs automatically extracted from online news sources, with human annotations of whether the sentence pairs are semantically equivalent. The MRPC dataset has imbalanced classes (68% positive, 32% for negative). QQP is a collection of question pairs from the community question-answering website Quora. The class distribution in QQP is also unbalanced (37% positive, 63% negative). Experiments ::: Paraphrase Identification ::: Results Table shows the results for PI task. We find that replacing the training objective with DSC introduces performance boost for both BERT and XLNet. Using DSC loss improves the F1 score by +0.58 for MRPC and +0.73 for QQP. Ablation Studies ::: The Effect of Dice Loss on Accuracy-oriented Tasks We argue that the most commonly used cross-entropy objective is actually accuracy-oriented, whereas the proposed dice loss (DL) performs as a hard version of F1-score. To explore the effect of the dice loss on accuracy-oriented tasks such as text classification, we conduct experiments on the Stanford Sentiment Treebank sentiment classification datasets including SST-2 and SST-5. We fine-tune BERT$_\text{Large}$ with different training objectives. Experiment results for SST are shown in . For SST-5, BERT with CE achieves 55.57 in terms of accuracy, with DL and DSC losses slightly degrade the accuracy performance and achieve 54.63 and 55.19, respectively. For SST-2, BERT with CE achieves 94.9 in terms of accuracy. The same as SST-5, we observe a slight performance drop with DL and DSC, which means that the dice loss actually works well for F1 but not for accuracy. Ablation Studies ::: The Effect of Hyperparameters in Tversky index As mentioned in Section SECREF10, Tversky index (TI) offers the flexibility in controlling the tradeoff between false-negatives and false-positives. In this subsection, we explore the effect of hyperparameters (i.e., $\alpha $ and $\beta $) in TI to test how they manipulate the tradeoff. We conduct experiments on the Chinese OntoNotes4.0 NER dataset and English QuoRef MRC dataset to examine the influence of tradeoff between precision and recall. Experiment results are shown in Table . The highest F1 for Chinese OntoNotes4.0 is 84.67 when $\alpha $ is set to 0.6 while for QuoRef, the highest F1 is 68.44 when $\alpha $ is set to 0.4. In addition, we can observe that the performance varies a lot as $\alpha $ changes in distinct datasets, which shows that the hyperparameters $\alpha ,\beta $ play an important role in the proposed method. Conclusion In this paper, we alleviate the severe data imbalance issue in NLP tasks. We propose to use dice loss in replacement of the standard cross-entropy loss, which performs as a soft version of F1 score. Using dice loss can help narrow the gap between training objectives and evaluation metrics. Empirically, we show that the proposed training objective leads to significant performance boost for part-of-speech, named entity recognition, machine reading comprehension and paraphrase identification tasks. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What are method's improvements of F1 for NER task for English and Chinese datasets? Only give me the answer and do not output any other words.
Answer:
[ "English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively, Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively", "For English datasets including CoNLL2003 and OntoNotes5.0, our proposed method outperforms BERT-MRCBIBREF38 by +0.29 and +0.96 respectively., huge performance boosts on Chinese datasets, achieving F1 improvements by +0.97 and +2.36 on MSRA and OntoNotes4.0, respectively" ]
276
5,435
single_doc_qa
b5cfce0a99a941bf9071c32c7f0e99ef
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction Machine Reading Comprehension (MRC), as the name suggests, requires a machine to read a passage and answer its relevant questions. Since the answer to each question is supposed to stem from the corresponding passage, a common MRC solution is to develop a neural-network-based MRC model that predicts an answer span (i.e. the answer start position and the answer end position) from the passage of each given passage-question pair. To facilitate the explorations and innovations in this area, many MRC datasets have been established, such as SQuAD BIBREF0 , MS MARCO BIBREF1 , and TriviaQA BIBREF2 . Consequently, many pioneering MRC models have been proposed, such as BiDAF BIBREF3 , R-NET BIBREF4 , and QANet BIBREF5 . According to the leader board of SQuAD, the state-of-the-art MRC models have achieved the same performance as human beings. However, does this imply that they have possessed the same reading comprehension ability as human beings? OF COURSE NOT. There is a huge gap between MRC models and human beings, which is mainly reflected in the hunger for data and the robustness to noise. On the one hand, developing MRC models requires a large amount of training examples (i.e. the passage-question pairs labeled with answer spans), while human beings can achieve good performance on evaluation examples (i.e. the passage-question pairs to address) without training examples. On the other hand, BIBREF6 revealed that intentionally injected noise (e.g. misleading sentences) in evaluation examples causes the performance of MRC models to drop significantly, while human beings are far less likely to suffer from this. The reason for these phenomena, we believe, is that MRC models can only utilize the knowledge contained in each given passage-question pair, but in addition to this, human beings can also utilize general knowledge. A typical category of general knowledge is inter-word semantic connections. As shown in Table TABREF1 , such general knowledge is essential to the reading comprehension ability of human beings. A promising strategy to bridge the gap mentioned above is to integrate the neural networks of MRC models with the general knowledge of human beings. To this end, it is necessary to solve two problems: extracting general knowledge from passage-question pairs and utilizing the extracted general knowledge in the prediction of answer spans. The first problem can be solved with knowledge bases, which store general knowledge in structured forms. A broad variety of knowledge bases are available, such as WordNet BIBREF7 storing semantic knowledge, ConceptNet BIBREF8 storing commonsense knowledge, and Freebase BIBREF9 storing factoid knowledge. In this paper, we limit the scope of general knowledge to inter-word semantic connections, and thus use WordNet as our knowledge base. The existing way to solve the second problem is to encode general knowledge in vector space so that the encoding results can be used to enhance the lexical or contextual representations of words BIBREF10 , BIBREF11 . However, this is an implicit way to utilize general knowledge, since in this way we can neither understand nor control the functioning of general knowledge. In this paper, we discard the existing implicit way and instead explore an explicit (i.e. understandable and controllable) way to utilize general knowledge. The contribution of this paper is two-fold. On the one hand, we propose a data enrichment method, which uses WordNet to extract inter-word semantic connections as general knowledge from each given passage-question pair. On the other hand, we propose an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the above extracted general knowledge to assist its attention mechanisms. Based on the data enrichment method, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them. When only a subset ( INLINEFORM0 – INLINEFORM1 ) of the training examples are available, KAR outperforms the state-of-the-art MRC models by a large margin, and is still reasonably robust to noise. Data Enrichment Method In this section, we elaborate a WordNet-based data enrichment method, which is aimed at extracting inter-word semantic connections from each passage-question pair in our MRC dataset. The extraction is performed in a controllable manner, and the extracted results are provided as general knowledge to our MRC model. Semantic Relation Chain WordNet is a lexical database of English, where words are organized into synsets according to their senses. A synset is a set of words expressing the same sense so that a word having multiple senses belongs to multiple synsets, with each synset corresponding to a sense. Synsets are further related to each other through semantic relations. According to the WordNet interface provided by NLTK BIBREF12 , there are totally sixteen types of semantic relations (e.g. hypernyms, hyponyms, holonyms, meronyms, attributes, etc.). Based on synset and semantic relation, we define a new concept: semantic relation chain. A semantic relation chain is a concatenated sequence of semantic relations, which links a synset to another synset. For example, the synset “keratin.n.01” is related to the synset “feather.n.01” through the semantic relation “substance holonym”, the synset “feather.n.01” is related to the synset “bird.n.01” through the semantic relation “part holonym”, and the synset “bird.n.01” is related to the synset “parrot.n.01” through the semantic relation “hyponym”, thus “substance holonym INLINEFORM0 part holonym INLINEFORM1 hyponym” is a semantic relation chain, which links the synset “keratin.n.01” to the synset “parrot.n.01”. We name each semantic relation in a semantic relation chain as a hop, therefore the above semantic relation chain is a 3-hop chain. By the way, each single semantic relation is equivalent to a 1-hop chain. Inter-word Semantic Connection The key problem in the data enrichment method is determining whether a word is semantically connected to another word. If so, we say that there exists an inter-word semantic connection between them. To solve this problem, we define another new concept: the extended synsets of a word. Given a word INLINEFORM0 , whose synsets are represented as a set INLINEFORM1 , we use another set INLINEFORM2 to represent its extended synsets, which includes all the synsets that are in INLINEFORM3 or that can be linked to from INLINEFORM4 through semantic relation chains. Theoretically, if there is no limitation on semantic relation chains, INLINEFORM5 will include all the synsets in WordNet, which is meaningless in most situations. Therefore, we use a hyper-parameter INLINEFORM6 to represent the permitted maximum hop count of semantic relation chains. That is to say, only the chains having no more than INLINEFORM7 hops can be used to construct INLINEFORM8 so that INLINEFORM9 becomes a function of INLINEFORM10 : INLINEFORM11 (if INLINEFORM12 , we will have INLINEFORM13 ). Based on the above statements, we formulate a heuristic rule for determining inter-word semantic connections: a word INLINEFORM14 is semantically connected to another word INLINEFORM15 if and only if INLINEFORM16 . General Knowledge Extraction Given a passage-question pair, the inter-word semantic connections that connect any word to any passage word are regarded as the general knowledge we need to extract. Considering the requirements of our MRC model, we only extract the positional information of such inter-word semantic connections. Specifically, for each word INLINEFORM0 , we extract a set INLINEFORM1 , which includes the positions of the passage words that INLINEFORM2 is semantically connected to (if INLINEFORM3 itself is a passage word, we will exclude its own position from INLINEFORM4 ). We can control the amount of the extracted results by setting the hyper-parameter INLINEFORM5 : if we set INLINEFORM6 to 0, inter-word semantic connections will only exist between synonyms; if we increase INLINEFORM7 , inter-word semantic connections will exist between more words. That is to say, by increasing INLINEFORM8 within a certain range, we can usually extract more inter-word semantic connections from a passage-question pair, and thus can provide the MRC model with more general knowledge. However, due to the complexity and diversity of natural languages, only a part of the extracted results can serve as useful general knowledge, while the rest of them are useless for the prediction of answer spans, and the proportion of the useless part always rises when INLINEFORM9 is set larger. Therefore we set INLINEFORM10 through cross validation (i.e. according to the performance of the MRC model on the development examples). Knowledge Aided Reader In this section, we elaborate our MRC model: Knowledge Aided Reader (KAR). The key components of most existing MRC models are their attention mechanisms BIBREF13 , which are aimed at fusing the associated representations of each given passage-question pair. These attention mechanisms generally fall into two categories: the first one, which we name as mutual attention, is aimed at fusing the question representations into the passage representations so as to obtain the question-aware passage representations; the second one, which we name as self attention, is aimed at fusing the question-aware passage representations into themselves so as to obtain the final passage representations. Although KAR is equipped with both categories, its most remarkable feature is that it explicitly uses the general knowledge extracted by the data enrichment method to assist its attention mechanisms. Therefore we separately name the attention mechanisms of KAR as knowledge aided mutual attention and knowledge aided self attention. Task Definition Given a passage INLINEFORM0 and a relevant question INLINEFORM1 , the task is to predict an answer span INLINEFORM2 , where INLINEFORM3 , so that the resulting subsequence INLINEFORM4 from INLINEFORM5 is an answer to INLINEFORM6 . Overall Architecture As shown in Figure FIGREF7 , KAR is an end-to-end MRC model consisting of five layers: Lexicon Embedding Layer. This layer maps the words to the lexicon embeddings. The lexicon embedding of each word is composed of its word embedding and character embedding. For each word, we use the pre-trained GloVe BIBREF14 word vector as its word embedding, and obtain its character embedding with a Convolutional Neural Network (CNN) BIBREF15 . For both the passage and the question, we pass the concatenation of the word embeddings and the character embeddings through a shared dense layer with ReLU activation, whose output dimensionality is INLINEFORM0 . Therefore we obtain the passage lexicon embeddings INLINEFORM1 and the question lexicon embeddings INLINEFORM2 . Context Embedding Layer. This layer maps the lexicon embeddings to the context embeddings. For both the passage and the question, we process the lexicon embeddings (i.e. INLINEFORM0 for the passage and INLINEFORM1 for the question) with a shared bidirectional LSTM (BiLSTM) BIBREF16 , whose hidden state dimensionality is INLINEFORM2 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the passage context embeddings INLINEFORM3 and the question context embeddings INLINEFORM4 . Coarse Memory Layer. This layer maps the context embeddings to the coarse memories. First we use knowledge aided mutual attention (introduced later) to fuse INLINEFORM0 into INLINEFORM1 , the outputs of which are represented as INLINEFORM2 . Then we process INLINEFORM3 with a BiLSTM, whose hidden state dimensionality is INLINEFORM4 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the coarse memories INLINEFORM5 , which are the question-aware passage representations. Refined Memory Layer. This layer maps the coarse memories to the refined memories. First we use knowledge aided self attention (introduced later) to fuse INLINEFORM0 into themselves, the outputs of which are represented as INLINEFORM1 . Then we process INLINEFORM2 with a BiLSTM, whose hidden state dimensionality is INLINEFORM3 . By concatenating the forward LSTM outputs and the backward LSTM outputs, we obtain the refined memories INLINEFORM4 , which are the final passage representations. Answer Span Prediction Layer. This layer predicts the answer start position and the answer end position based on the above layers. First we obtain the answer start position distribution INLINEFORM0 : INLINEFORM1 INLINEFORM2 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters; INLINEFORM3 represents the refined memory of each passage word INLINEFORM4 (i.e. the INLINEFORM5 -th column in INLINEFORM6 ); INLINEFORM7 represents the question summary obtained by performing an attention pooling over INLINEFORM8 . Then we obtain the answer end position distribution INLINEFORM9 : INLINEFORM10 INLINEFORM11 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters; INLINEFORM3 represents vector concatenation. Finally we construct an answer span prediction matrix INLINEFORM4 , where INLINEFORM5 represents the upper triangular matrix of a matrix INLINEFORM6 . Therefore, for the training, we minimize INLINEFORM7 on each training example whose labeled answer span is INLINEFORM8 ; for the inference, we separately take the row index and column index of the maximum element in INLINEFORM9 as INLINEFORM10 and INLINEFORM11 . Knowledge Aided Mutual Attention As a part of the coarse memory layer, knowledge aided mutual attention is aimed at fusing the question context embeddings INLINEFORM0 into the passage context embeddings INLINEFORM1 , where the key problem is to calculate the similarity between each passage context embedding INLINEFORM2 (i.e. the INLINEFORM3 -th column in INLINEFORM4 ) and each question context embedding INLINEFORM5 (i.e. the INLINEFORM6 -th column in INLINEFORM7 ). To solve this problem, BIBREF3 proposed a similarity function: INLINEFORM8 where INLINEFORM0 is a trainable parameter; INLINEFORM1 represents element-wise multiplication. This similarity function has also been adopted by several other works BIBREF17 , BIBREF5 . However, since context embeddings contain high-level information, we believe that introducing the pre-extracted general knowledge into the calculation of such similarities will make the results more reasonable. Therefore we modify the above similarity function to the following form: INLINEFORM2 where INLINEFORM0 represents the enhanced context embedding of a word INLINEFORM1 . We use the pre-extracted general knowledge to construct the enhanced context embeddings. Specifically, for each word INLINEFORM2 , whose context embedding is INLINEFORM3 , to construct its enhanced context embedding INLINEFORM4 , first recall that we have extracted a set INLINEFORM5 , which includes the positions of the passage words that INLINEFORM6 is semantically connected to, thus by gathering the columns in INLINEFORM7 whose indexes are given by INLINEFORM8 , we obtain the matching context embeddings INLINEFORM9 . Then by constructing a INLINEFORM10 -attended summary of INLINEFORM11 , we obtain the matching vector INLINEFORM12 (if INLINEFORM13 , which makes INLINEFORM14 , we will set INLINEFORM15 ): INLINEFORM16 INLINEFORM17 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters; INLINEFORM3 represents the INLINEFORM4 -th column in INLINEFORM5 . Finally we pass the concatenation of INLINEFORM6 and INLINEFORM7 through a dense layer with ReLU activation, whose output dimensionality is INLINEFORM8 . Therefore we obtain the enhanced context embedding INLINEFORM9 . Based on the modified similarity function and the enhanced context embeddings, to perform knowledge aided mutual attention, first we construct a knowledge aided similarity matrix INLINEFORM0 , where each element INLINEFORM1 . Then following BIBREF5 , we construct the passage-attended question summaries INLINEFORM2 and the question-attended passage summaries INLINEFORM3 : INLINEFORM4 INLINEFORM5 where INLINEFORM0 represents softmax along the row dimension and INLINEFORM1 along the column dimension. Finally following BIBREF17 , we pass the concatenation of INLINEFORM2 , INLINEFORM3 , INLINEFORM4 , and INLINEFORM5 through a dense layer with ReLU activation, whose output dimensionality is INLINEFORM6 . Therefore we obtain the outputs INLINEFORM7 . Knowledge Aided Self Attention As a part of the refined memory layer, knowledge aided self attention is aimed at fusing the coarse memories INLINEFORM0 into themselves. If we simply follow the self attentions of other works BIBREF4 , BIBREF18 , BIBREF19 , BIBREF17 , then for each passage word INLINEFORM1 , we should fuse its coarse memory INLINEFORM2 (i.e. the INLINEFORM3 -th column in INLINEFORM4 ) with the coarse memories of all the other passage words. However, we believe that this is both unnecessary and distracting, since each passage word has nothing to do with many of the other passage words. Thus we use the pre-extracted general knowledge to guarantee that the fusion of coarse memories for each passage word will only involve a precise subset of the other passage words. Specifically, for each passage word INLINEFORM5 , whose coarse memory is INLINEFORM6 , to perform the fusion of coarse memories, first recall that we have extracted a set INLINEFORM7 , which includes the positions of the other passage words that INLINEFORM8 is semantically connected to, thus by gathering the columns in INLINEFORM9 whose indexes are given by INLINEFORM10 , we obtain the matching coarse memories INLINEFORM11 . Then by constructing a INLINEFORM12 -attended summary of INLINEFORM13 , we obtain the matching vector INLINEFORM14 (if INLINEFORM15 , which makes INLINEFORM16 , we will set INLINEFORM17 ): INLINEFORM18 INLINEFORM19 where INLINEFORM0 , INLINEFORM1 , and INLINEFORM2 are trainable parameters. Finally we pass the concatenation of INLINEFORM3 and INLINEFORM4 through a dense layer with ReLU activation, whose output dimensionality is INLINEFORM5 . Therefore we obtain the fusion result INLINEFORM6 , and further the outputs INLINEFORM7 . Related Works Attention Mechanisms. Besides those mentioned above, other interesting attention mechanisms include performing multi-round alignment to avoid the problems of attention redundancy and attention deficiency BIBREF20 , and using mutual attention as a skip-connector to densely connect pairwise layers BIBREF21 . Data Augmentation. It is proved that properly augmenting training examples can improve the performance of MRC models. For example, BIBREF22 trained a generative model to generate questions based on unlabeled text, which substantially boosted their performance; BIBREF5 trained a back-and-forth translation model to paraphrase training examples, which brought them a significant performance gain. Multi-step Reasoning. Inspired by the fact that human beings are capable of understanding complex documents by reading them over and over again, multi-step reasoning was proposed to better deal with difficult MRC tasks. For example, BIBREF23 used reinforcement learning to dynamically determine the number of reasoning steps; BIBREF19 fixed the number of reasoning steps, but used stochastic dropout in the output layer to avoid step bias. Linguistic Embeddings. It is both easy and effective to incorporate linguistic embeddings into the input layer of MRC models. For example, BIBREF24 and BIBREF19 used POS embeddings and NER embeddings to construct their input embeddings; BIBREF25 used structural embeddings based on parsing trees to constructed their input embeddings. Transfer Learning. Several recent breakthroughs in MRC benefit from feature-based transfer learning BIBREF26 , BIBREF27 and fine-tuning-based transfer learning BIBREF28 , BIBREF29 , which are based on certain word-level or sentence-level models pre-trained on large external corpora in certain supervised or unsupervised manners. Experimental Settings MRC Dataset. The MRC dataset used in this paper is SQuAD 1.1, which contains over INLINEFORM0 passage-question pairs and has been randomly partitioned into three parts: a training set ( INLINEFORM1 ), a development set ( INLINEFORM2 ), and a test set ( INLINEFORM3 ). Besides, we also use two of its adversarial sets, namely AddSent and AddOneSent BIBREF6 , to evaluate the robustness to noise of MRC models. The passages in the adversarial sets contain misleading sentences, which are aimed at distracting MRC models. Specifically, each passage in AddSent contains several sentences that are similar to the question but not contradictory to the answer, while each passage in AddOneSent contains a human-approved random sentence that may be unrelated to the passage. Implementation Details. We tokenize the MRC dataset with spaCy 2.0.13 BIBREF30 , manipulate WordNet 3.0 with NLTK 3.3, and implement KAR with TensorFlow 1.11.0 BIBREF31 . For the data enrichment method, we set the hyper-parameter INLINEFORM0 to 3. For the dense layers and the BiLSTMs, we set the dimensionality unit INLINEFORM1 to 600. For model optimization, we apply the Adam BIBREF32 optimizer with a learning rate of INLINEFORM2 and a mini-batch size of 32. For model evaluation, we use Exact Match (EM) and F1 score as evaluation metrics. To avoid overfitting, we apply dropout BIBREF33 to the dense layers and the BiLSTMs with a dropout rate of INLINEFORM3 . To boost the performance, we apply exponential moving average with a decay rate of INLINEFORM4 . Model Comparison in both Performance and the Robustness to Noise We compare KAR with other MRC models in both performance and the robustness to noise. Specifically, we not only evaluate the performance of KAR on the development set and the test set, but also do this on the adversarial sets. As for the comparative objects, we only consider the single MRC models that rank in the top 20 on the SQuAD 1.1 leader board and have reported their performance on the adversarial sets. There are totally five such comparative objects, which can be considered as representatives of the state-of-the-art MRC models. As shown in Table TABREF12 , on the development set and the test set, the performance of KAR is on par with that of the state-of-the-art MRC models; on the adversarial sets, KAR outperforms the state-of-the-art MRC models by a large margin. That is to say, KAR is comparable in performance with the state-of-the-art MRC models, and significantly more robust to noise than them. To verify the effectiveness of general knowledge, we first study the relationship between the amount of general knowledge and the performance of KAR. As shown in Table TABREF13 , by increasing INLINEFORM0 from 0 to 5 in the data enrichment method, the amount of general knowledge rises monotonically, but the performance of KAR first rises until INLINEFORM1 reaches 3 and then drops down. Then we conduct an ablation study by replacing the knowledge aided attention mechanisms with the mutual attention proposed by BIBREF3 and the self attention proposed by BIBREF4 separately, and find that the F1 score of KAR drops by INLINEFORM2 on the development set, INLINEFORM3 on AddSent, and INLINEFORM4 on AddOneSent. Finally we find that after only one epoch of training, KAR already achieves an EM of INLINEFORM5 and an F1 score of INLINEFORM6 on the development set, which is even better than the final performance of several strong baselines, such as DCN (EM / F1: INLINEFORM7 / INLINEFORM8 ) BIBREF36 and BiDAF (EM / F1: INLINEFORM9 / INLINEFORM10 ) BIBREF3 . The above empirical findings imply that general knowledge indeed plays an effective role in KAR. To demonstrate the advantage of our explicit way to utilize general knowledge over the existing implicit way, we compare the performance of KAR with that reported by BIBREF10 , which used an encoding-based method to utilize the general knowledge dynamically retrieved from Wikipedia and ConceptNet. Since their best model only achieved an EM of INLINEFORM0 and an F1 score of INLINEFORM1 on the development set, which is much lower than the performance of KAR, we have good reason to believe that our explicit way works better than the existing implicit way. Model Comparison in the Hunger for Data We compare KAR with other MRC models in the hunger for data. Specifically, instead of using all the training examples, we produce several training subsets (i.e. subsets of the training examples) so as to study the relationship between the proportion of the available training examples and the performance. We produce each training subset by sampling a specific number of questions from all the questions relevant to each passage. By separately sampling 1, 2, 3, and 4 questions on each passage, we obtain four training subsets, which separately contain INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , and INLINEFORM3 of the training examples. As shown in Figure FIGREF15 , with KAR, SAN (re-implemented), and QANet (re-implemented without data augmentation) trained on these training subsets, we evaluate their performance on the development set, and find that KAR performs much better than SAN and QANet. As shown in Figure FIGREF16 and Figure FIGREF17 , with the above KAR, SAN, and QANet trained on the same training subsets, we also evaluate their performance on the adversarial sets, and still find that KAR performs much better than SAN and QANet. That is to say, when only a subset of the training examples are available, KAR outperforms the state-of-the-art MRC models by a large margin, and is still reasonably robust to noise. Analysis According to the experimental results, KAR is not only comparable in performance with the state-of-the-art MRC models, but also superior to them in terms of both the hunger for data and the robustness to noise. The reasons for these achievements, we believe, are as follows: Conclusion In this paper, we innovatively integrate the neural networks of MRC models with the general knowledge of human beings. Specifically, inter-word semantic connections are first extracted from each given passage-question pair by a WordNet-based data enrichment method, and then provided as general knowledge to an end-to-end MRC model named as Knowledge Aided Reader (KAR), which explicitly uses the general knowledge to assist its attention mechanisms. Experimental results show that KAR is not only comparable in performance with the state-of-the-art MRC models, but also superior to them in terms of both the hunger for data and the robustness to noise. In the future, we plan to use some larger knowledge bases, such as ConceptNet and Freebase, to improve the quality and scope of the general knowledge. Acknowledgments This work is partially supported by a research donation from iFLYTEK Co., Ltd., Hefei, China, and a discovery grant from Natural Sciences and Engineering Research Council (NSERC) of Canada. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Do the authors hypothesize that humans' robustness to noise is due to their general knowledge? Only give me the answer and do not output any other words.
Answer:
[ "Yes", "Yes" ]
276
5,563
single_doc_qa
1ff968c9cf834b77b0090c21eb193aff
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: Introduction The automatic identification, extraction and representation of the information conveyed in texts is a key task nowadays. In fact, this research topic is increasing its relevance with the exponential growth of social networks and the need to have tools that are able to automatically process them BIBREF0. Some of the domains where it is more important to be able to perform this kind of action are the juridical and legal ones. Effectively, it is crucial to have the capability to analyse open access text sources, like social nets (Twitter and Facebook, for instance), blogs, online newspapers, and to be able to extract the relevant information and represent it in a knowledge base, allowing posterior inferences and reasoning. In the context of this work, we will present results of the R&D project Agatha, where we developed a pipeline of processes that analyses texts (in Portuguese, Spanish, or English) and is able to populate a specialized ontology BIBREF1 (related to criminal law) for the representation of events, depicted in such texts. Events are represented by objects having associated actions, agents, elements, places and time. After having populated the event ontology, we have an automatic process linking the identified entities to external referents, creating, this way, a linked data knowledge base. It is important to point out that, having the text information represented in an ontology allows us to perform complex queries and inferences, which can detect patterns of typical criminal actions. Another axe of innovation in this research is the development, for the Portuguese language, of a pipeline of Natural Language Processing (NLP) processes, that allows us to fully process sentences and represent their content in an ontology. Although there are several tools for the processing of the Portuguese language, the combination of all these steps in a integrated tool is a new contribution. Moreover, we have already explored other related research path, namely author profiling BIBREF2, aggression identification BIBREF3 and hate-speech detection BIBREF4 over social media, plus statute law retrieval and entailment for Japanese BIBREF5. The remainder of this paper is organized as follows: Section SECREF2 describes our proposed architecture together with the Portuguese modules for its computational processing. Section SECREF3 discusses different design options and Section SECREF4 provides our conclusions together with some pointers for future work. Framework for Processing Portuguese Text The framework for processing Portuguese texts is depicted in Fig. FIGREF2, which illustrates how relevant pieces of information are extracted from the text. Namely, input files (Portuguese texts) go through a series of modules: part-of-speech tagging, named entity recognition, dependency parsing, semantic role labeling, subject-verb-object triple extraction, and lexicon matching. The main goal of all the modules except lexicon matching is to identify events given in the text. These events are then used to populate an ontology. The lexicon matching, on the other hand, was created to link words that are found in the text source with the data available not only on Eurovoc BIBREF6 thesaurus but also on the EU's terminology database IATE BIBREF7 (see Section SECREF12 for details). Most of these modules are deeply related and are detailed in the subsequent subsections. Framework for Processing Portuguese Text ::: Part-Of-Speech Tagging Part-of-speech tagging happens after language detection. It labels each word with a tag that indicates its syntactic role in the sentence. For instance, a word could be a noun, verb, adjective or adverb (or other syntactic tag). We used Freeling BIBREF8 library to provide the tags. This library resorts to a Hidden Markov Model as described by Brants BIBREF9. The end result is a tag for each word as described by the EAGLES tagset . Framework for Processing Portuguese Text ::: Named Entity Recognition We use the named entity recognition module after part-of-speech tagging. This module labels each part of the sentence into different categories such as "PERSON", "LOCATION", or "ORGANIZATION". We also used Freeling to label the named entities and the details of the algorithm are shown in the paper by Carreras et al BIBREF10. Aside from the three aforementioned categories, we also extracted "DATE/TIME" and "CURRENCY" values by looking at the part-of-speech tags: date/time words have a tag of "W", while currencies have "Zm". Framework for Processing Portuguese Text ::: Dependency Parsing Dependency parsing involves tagging a word based on different features to indicate if it is dependent on another word. The Freeling library also has dependency parsing models for Portuguese. Since we wanted to build a SRL (Semantic Role Labeling) module on top of the dependency parser and the current released version of Freeling does not have an SRL module for Portuguese, we trained a different Portuguese dependency parsing model that was compatible (in terms of used tags) with the available annotated. We used the dataset from System-T BIBREF11, which has SRL tags, as well as, the other preceding tags. It was necessary to do some pre-processing and tag mapping in order to make it viable to train a Portuguese model. We made 589 tag conversions over 14 different categories. The breakdown of tag conversions per category is given by table TABREF7. These rules can be further seen in the corresponding Github repository BIBREF12. The modified training and development datasets are also available on another Github repositorypage BIBREF13 for further research and comparison purposes. Framework for Processing Portuguese Text ::: Semantic Role Labeling We execute the SRL (Semantic Role Labeling) module after obtaining the word dependencies. This module aims at giving a semantic role to a syntactic constituent of a sentence. The semantic role is always in relation to a verb and these roles could either be an actor, object, time, or location, which are then tagged as A0, A1, AM-TMP, AM-LOC, respectively. We trained a model for this module on top of the dependency parser described in the previous subsection using the modified dataset from System-T. The module also needs co-reference resolution to work and, to achieve this, we adapted the Spanish co-reference modules for Portuguese, changing the words that are equivalent (in total, we changed 253 words). Framework for Processing Portuguese Text ::: SVO Extraction From the yield of the SRL (Semantic Role Labeling) module, our framework can distinguish actors, actions, places, time and objects from the sentences. Utilizing this extracted data, we can distinguish subject-verb-object (SVO) triples using the SVO extraction algorithm BIBREF14. The algorithm finds, for each sentence, the verb and the tuples related to that verb using Semantic Role Labeling (subsection SECREF8). After the extraction of SVOs from texts, they are inserted into a specific event ontology (see section SECREF12 for the creation of a knowledge base). Framework for Processing Portuguese Text ::: Lexicon Matching The sole purpose of this module is to find important terms and/or concepts from the extracted text. To do this, we use Euvovoc BIBREF6, a multilingual thesaurus that was developed for and by the European Union. The Euvovoc has 21 fields and each field is further divided into a variable number of micro-thesauri. Here, due to the application of this work in the Agatha project (mentioned in Section SECREF1), we use the terms of the criminal law BIBREF15 micro-thesaurus. Further, we classified each term of the criminal law micro-thesaurus into four categories namely, actor, event, place and object. The term classification can be seen in Table TABREF11. After the classification of these terms, we implemented two different matching algorithms between the extracted words and the criminal law micro-thesaurus terms. The first is an exact string match wherein lowercase equivalents of the words of the input sentences are matched exactly with lower case equivalents of the predefined terms. The second matching algorithm uses Levenshtein distance BIBREF16, allowing some near-matches that are close enough to the target term. Framework for Processing Portuguese Text ::: Linked Data: Ontology, Thesaurus and Terminology In the computer science field, an ontology can be defined has: a formal specification of a conceptualization; shared vocabulary and taxonomy which models a domain with the definition of objects and/or concepts and their properties and relations; the representation of entities, ideas, and events, along with their properties and relations, according to a system of categories. A knowledge base is one kind of repository typically used to store answers to questions or solutions to problems enabling rapid search, retrieval, and reuse, either by an ontology or directly by those requesting support. For a more detailed description of ontologies and knowledge bases, see for instance BIBREF17. For designing the ontology adequate for our goals, we referred to the Simple Event Model (SEM) BIBREF18 as a baseline model. A pictorial representation of this ontology is given in Figure FIGREF16 Considering the criminal law domain case study, we made a few changes to the original SEM ontology. The entities of the model are: Actor: person involved with event Place: location of the event Time: time of the event Object: that actor act upon Organization: organization involved with event Currency: money involved with event The proposed ontology was designed in such a manner that it can incorporate information extracted from multiple documents. In this context, suppose that the source of documents is aare a legal police department, where each document isare under the hood of a particular case/crime; furthermoreFurther, a single case can have documents from multiple languages. Now, considering case 1 has 100 documents and case 2 has 100 documents then there is not only a connection among the documents of a single case but rather among all the cases with all the combined 200 documents. In this way, the proposed method is able to produce a detailed and well-connected knowledge base. Figure FIGREF23 shows the proposed ontology, which, in our evaluation procedure, was populated with 3121 events entries from 51 documents. Protege BIBREF19 tool was used for creating the ontology and GraphDB BIBREF20 for populating & querying the data. GraphDB is an enterprise-ready Semantic Graph Database, compliant with W3C Standards. Semantic Graph Databases (also called RDF triplestores) provide the core infrastructure for solutions where modeling agility, data integration, relationship exploration, and cross-enterprise data publishing and consumption are important. GraphDB has a SPARQL (SQL-like query language) interface for RDF graph databases with the following types: SELECT: returns tabular results CONSTRUCT: creates a new RDF graph based on query results ASK: returns "YES", if the query has a solution, otherwise "NO" DESCRIBE: returns RDF data about a resource. This is useful when the RDF data structure in the data source is not known INSERT: inserts triples into a graph DELETE: deletes triples from a graph Furthermore, we have extended the ontology BIBREF21 to connect the extracted terms with Eurovoc criminal law (discussed in subsection SECREF10) and IATE BIBREF7 terms. IATE (Interactive Terminology for Europe) is the EU's general terminology database and its aim is to provide a web-based infrastructure for all EU terminology resources, enhancing the availability and standardization of the information. The extended ontology has a number of sub-classes for Actor, Event, Object and Place classes detailed in Table TABREF30. Discussion We have defined a major design principle for our architecture: it should be modular and not rely on human made rules allowing, as much as possible, its independence from a specific language. In this way, its potential application to another language would be easier, simply by changing the modules or the models of specific modules. In fact, we have explored the use of already existing modules and adopted and integrated several of these tools into our pipeline. It is important to point out that, as far as we know, there is no integrated architecture supporting the full processing pipeline for the Portuguese language. We evaluated several systems like Rembrandt BIBREF22 or LinguaKit: the former only has the initial steps of our proposal (until NER) and the later performed worse than our system. This framework, developed within the context of the Agatha project (described in Section SECREF1) has the full processing pipeline for Portuguese texts: it receives sentences as input and outputs ontological information: a) first performs all NLP typical tasks until semantic role labelling; b) then, it extracts subject-verb-object triples; c) and, then, it performs ontology matching procedures. As a final result, the obtained output is inserted into a specialized ontology. We are aware that each of the architecture modules can, and should, be improved but our main goal was the creation of a full working text processing pipeline for the Portuguese language. Conclusions and Future Work Besides the end–to–end NLP pipeline for the Portuguese language, the other main contributions of this work can be summarize as follows: Development of an ontology for the criminal law domain; Alignment of the Eurovoc thesaurus and IATE terminology with the ontology created; Representation of the extracted events from texts in the linked knowledge base defined. The obtained results support our claim that the proposed system can be used as a base tool for information extraction for the Portuguese language. Being composed by several modules, each of them with a high level of complexity, it is certain that our approach can be improved and an overall better performance can be achieved. As future work we intend, not only to continue improving the individual modules, but also plan to extend this work to the: automatic creation of event timelines; incorporation in the knowledge base of information obtained from videos or pictures describing scenes relevant to criminal investigations. Acknowledgments The authors would like to thank COMPETE 2020, PORTUGAL 2020 Program, the European Union, and ALENTEJO 2020 for supporting this research as part of Agatha Project SI & IDT number 18022 (Intelligent analysis system of open of sources information for surveillance/crime control). The authors would also like to thank LISP - Laboratory of Informatics, Systems and Parallelism. Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: Were any of the pipeline components based on deep learning models? Only give me the answer and do not output any other words.
Answer:
[ "No", "No" ]
276
2,944
single_doc_qa
1197611f86cb48a48d2af9545bdf1c53
Answer the question based on the given passages. Only give me the answer and do not output any other words. The following are given passages and these passages are from many different fields. Passage 1: \section{Introduction} Underwater robot picking is to use the robot to automatically capture sea creatures like holothurian, echinus, scallop, or starfish in an open-sea farm where underwater object detection is the key technology for locating creatures. Until now, the datasets used in this community are released by the Underwater Robot Professional Contest (URPC$\protect\footnote{Underwater Robot Professional Contest: {\bf http://en.cnurpc.org}.}$) beginning from 2017, in which URPC2017 and URPC2018 are most often used for research. Unfortunately, as the information listed in Table \ref{Info}, URPC series datasets do not provide the annotation file of the test set and cannot be downloaded after the contest. Therefore, researchers \cite{2020arXiv200511552C,2019arXiv191103029L} first have to divide the training data into two subsets, including a new subset of training data and a new subset of testing data, and then train their proposed method and other \emph{SOTA} methods. On the one hand, training other methods results in a significant increase in workload. On the other hand, different researchers divide different datasets in different ways, \begin{table}[t] \renewcommand\tabcolsep{3.5pt} \caption{Information about all the collected datasets. * denotes the test set's annotations are not available. \emph{3} in Class means three types of creatures are labeled, \emph{i.e.,} holothurian, echinus, and scallop. \emph{4} means four types of creatures are labeled (starfish added). Retention represents the proportion of images that retain after similar images have been removed.} \centering \begin{tabular}{|l|c|c|c|c|c|} \hline Dataset&Train&Test&Class&Retention&Year \\ \hline URPC2017&17,655&985*&3&15\%&2017 \\ \hline URPC2018&2,901&800*&4&99\%&2018 \\ \hline URPC2019&4,757&1,029*&4&86\%&2019 \\ \hline URPC2020$_{ZJ}$&5,543&2,000*&4&82\%&2020 \\ \hline URPC2020$_{DL}$&6,575&2,400*&4&80\%&2020 \\ \hline UDD&1,827&400&3&84\%&2020 \\ \hline \end{tabular} \label{Info} \end{table} \begin{figure*}[htbp] \begin{center} \includegraphics[width=1\linewidth]{example.pdf} \end{center} \caption{Examples in DUO, which show a variety of scenarios in underwater environments.} \label{exam} \end{figure*} causing there is no unified benchmark to compare the performance of different algorithms. In terms of the content of the dataset images, there are a large number of similar or duplicate images in the URPC datasets. URPC2017 only retains 15\% images after removing similar images compared to other datasets. Thus the detector trained on URPC2017 is easy to overfit and cannot reflect the real performance. For other URPC datasets, the latter also includes images from the former, \emph{e.g.}, URPC2019 adds 2,000 new images compared to URPC2018; compared with URPC2019, URPC2020$_{ZJ}$ adds 800 new images. The URPC2020$_{DL}$ adds 1,000 new images compared to the URPC2020$_{ZJ}$. It is worth mentioning that the annotation of all datasets is incomplete; some datasets lack the starfish labels and it is easy to find error or missing labels. \cite{DBLP:conf/iclr/ZhangBHRV17} pointed out that although the CNN model has a strong fitting ability for any dataset, the existence of dirty data will significantly weaken its robustness. Therefore, a reasonable dataset (containing a small number of similar images as well as an accurate annotation) and a corresponding recognized benchmark are urgently needed to promote community development. To address these issues, we introduce a dataset called Detecting Underwater Objects (DUO) by collecting and re-annotating all the available underwater datasets. It contains 7,782 underwater images after deleting overly similar images and has a more accurate annotation with four types of classes (\emph{i.e.,} holothurian, echinus, scallop, and starfish). Besides, based on the MMDetection$\protect\footnote{MMDetection is an open source object detection toolbox based on PyTorch. {\bf https://github.com/open-mmlab/mmdetection}}$ \cite{chen2019mmdetection} framework, we also provide a \emph{SOTA} detector benchmark containing efficiency and accuracy indicators, providing a reference for both academic research and industrial applications. It is worth noting that JETSON AGX XAVIER$\protect\footnote{JETSON AGX XAVIER is an embedded development board produced by NVIDIA which could be deployed in an underwater robot. Please refer {\bf https://developer.nvidia.com/embedded/jetson-agx-xavier-developer-kit} for more information.}$ was used to assess all the detectors in the efficiency test in order to simulate robot-embedded environment. DUO will be released in https://github.com/chongweiliu soon. In summary, the contributions of this paper can be listed as follows. $\bullet$ By collecting and re-annotating all relevant datasets, we introduce a dataset called DUO with more reasonable annotations as well as a variety of underwater scenes. $\bullet$ We provide a corresponding benchmark of \emph{SOTA} detectors on DUO including efficiency and accuracy indicators which could be a reference for both academic research and industrial applications. \pagestyle{empty} \section{Background} In the year of 2017, underwater object detection for open-sea farming is first proposed in the target recognition track of Underwater Robot Picking Contest 2017$\protect\footnote{From 2020, the name has been changed into Underwater Robot Professional Contest which is also short for URPC.}$ (URPC2017) which aims to promote the development of theory, technology, and industry of the underwater agile robot and fill the blank of the grabbing task of the underwater agile robot. The competition sets up a target recognition track, a fixed-point grasping track, and an autonomous grasping track. The target recognition track concentrates on finding the {\bf high accuracy and efficiency} algorithm which could be used in an underwater robot for automatically grasping. The datasets we used to generate the DUO are listed below. The detailed information has been shown in Table \ref{Info}. {\bf URPC2017}: It contains 17,655 images for training and 985 images for testing and the resolution of all the images is 720$\times$405. All the images are taken from 6 videos at an interval of 10 frames. However, all the videos were filmed in an artificial simulated environment and pictures from the same video look almost identical. {\bf URPC2018}: It contains 2,901 images for training and 800 images for testing and the resolutions of the images are 586$\times$480, 704$\times$576, 720$\times$405, and 1,920$\times$1,080. The test set's annotations are not available. Besides, some images were also collected from an artificial underwater environment. {\bf URPC2019}: It contains 4,757 images for training and 1029 images for testing and the highest resolution of the images is 3,840$\times$2,160 captured by a GOPro camera. The test set's annotations are also not available and it contains images from the former contests. {\bf URPC2020$_{ZJ}$}: From 2020, the URPC will be held twice a year. It was held first in Zhanjiang, China, in April and then in Dalian, China, in August. URPC2020$_{ZJ}$ means the dataset released in the first URPC2020 and URPC2020$_{DL}$ means the dataset released in the second URPC2020. This dataset contains 5,543 images for training and 2,000 images for testing and the highest resolution of the images is 3,840$\times$2,160. The test set's annotations are also not available. {\bf URPC2020$_{DL}$}: This dataset contains 6,575 images for training and 2,400 images for testing and the highest resolution of the images is 3,840$\times$2,160. The test set's annotations are also not available. {\bf UDD \cite{2020arXiv200301446W}}: This dataset contains 1,827 images for training and 400 images for testing and the highest resolution of the images is 3,840$\times$2,160. All the images are captured by a diver and a robot in a real open-sea farm. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{pie.pdf} \end{center} \caption{The proportion distribution of the objects in DUO.} \label{pie} \end{figure} \begin{figure*} \centering \subfigure[]{\includegraphics[width=3.45in]{imagesize.pdf}} \subfigure[]{\includegraphics[width=3.45in]{numInstance.pdf}} \caption{(a) The distribution of instance sizes for DUO; (b) The number of categories per image.} \label{sum} \end{figure*} \section{Proposed Dataset} \subsection{Image Deduplicating} As we explained in Section 1, there are a large number of similar or repeated images in the series of URPC datasets. Therefore, it is important to delete duplicate or overly similar images and keep a variety of underwater scenarios when we merge these datasets together. Here we employ the Perceptual Hash algorithm (PHash) to remove those images. PHash has the special property that the hash value is dependent on the image content, and it remains approximately the same if the content is not significantly modified. Thus we can easily distinguish different scenarios and delete duplicate images within one scenario. After deduplicating, we obtain 7,782 images (6,671 images for training; 1,111 for testing). The retention rate of the new dataset is 95\%, which means that there are only a few similar images in the new dataset. Figure \ref{exam} shows that our dataset also retains various underwater scenes. \subsection{Image Re-annotation} Due to the small size of objects and the blur underwater environment, there are always missing or wrong labels in the existing annotation files. In addition, some test sets' annotation files are not available and some datasets do not have the starfish annotation. In order to address these issues, we follow the next process which combines a CNN model and manual annotation to re-annotate these images. Specifically, we first train a detector (\emph{i.e.,} GFL \cite{li2020generalized}) with the originally labeled images. After that, the trained detector predicts all the 7,782 images. We treat the prediction as the groundtruth and use it to train the GFL again. We get the final GFL prediction called {\bf the coarse annotation}. Next, we use manual correction to get the final annotation called {\bf the fine annotation}. Notably, we adopt the COCO \cite{Belongie2014} annotation form as the final format. \subsection{Dataset Statistics} {\bf The proportion of classes}: The total number of objects is 74,515. Holothurian, echinus, scallop, and starfish are 7,887, 50,156, 1,924, and 14,548, respectively. Figure \ref{pie} shows the proportion of each creatures where echinus accounts for 67.3\% of the total. The whole data distribution shows an obvious long-tail distribution because the different economic benefits of different seafoods determine the different breed quantities. {\bf The distribution of instance sizes}: Figure \ref{sum}(a) shows an instance size distribution of DUO. \emph{Percent of image size} represents the ratio of object area to image area, and \emph{Percent of instance} represents the ratio of the corresponding number of objects to the total number of objects. Because of these small creatures and high-resolution images, the vast majority of objects occupy 0.3\% to 1.5\% of the image area. {\bf The instance number per image}: Figure \ref{sum}(b) illustrates the number of categories per image for DUO. \emph{Number of instances} represents the number of objects one image has, and \emph{ Percentage of images} represents the ratio of the corresponding number of images to the total number of images. Most images contain between 5 and 15 instances, with an average of 9.57 instances per image. {\bf Summary}: In general, smaller objects are harder to detect. For PASCAL VOC \cite{Everingham2007The} or COCO \cite{Belongie2014}, roughly 50\% of all objects occupy no more than 10\% of the image itself, and others evenly occupy from 10\% to 100\%. In the aspect of instances number per image, COCO contains 7.7 instances per image and VOC contains 3. In comparison, DUO has 9.57 instances per image and most instances less than 1.5\% of the image size. Therefore, DUO contains almost exclusively massive small instances and has the long-tail distribution at the same time, which means it is promising to design a detector to deal with massive small objects and stay high efficiency at the same time for underwater robot picking. \section{Benchmark} Because the aim of underwater object detection for robot picking is to find {\bf the high accuracy and efficiency} algorithm, we consider both the accuracy and efficiency evaluations in the benchmark as shown in Table \ref{ben}. \subsection{Evaluation Metrics} Here we adopt the standard COCO metrics (mean average precision, \emph{i.e.,} mAP) for the accuracy evaluation and also provide the mAP of each class due to the long-tail distribution. {\bf AP} -- mAP at IoU=0.50:0.05:0.95. {\bf AP$_{50}$} -- mAP at IoU=0.50. {\bf AP$_{75}$} -- mAP at IoU=0.75. {\bf AP$_{S}$} -- {\bf AP} for small objects of area smaller than 32$^{2}$. {\bf AP$_{M}$} -- {\bf AP} for objects of area between 32$^{2}$ and 96$^{2}$. {\bf AP$_{L}$} -- {\bf AP} for large objects of area bigger than 96$^{2}$. {\bf AP$_{Ho}$} -- {\bf AP} in holothurian. {\bf AP$_{Ec}$} -- {\bf AP} in echinus. {\bf AP$_{Sc}$} -- {\bf AP} in scallop. {\bf AP$_{St}$} -- {\bf AP} in starfish. For the efficiency evaluation, we provide three metrics: {\bf Param.} -- The parameters of a detector. {\bf FLOPs} -- Floating-point operations per second. {\bf FPS} -- Frames per second. Notably, {\bf FLOPs} is calculated under the 512$\times$512 input image size and {\bf FPS} is tested on a JETSON AGX XAVIER under MODE$\_$30W$\_$ALL. \subsection{Standard Training Configuration} We follow a widely used open-source toolbox, \emph{i.e.,} MMDetection (V2.5.0) to produce up our benchmark. During the training, the standard configurations are as follows: $\bullet$ We initialize the backbone models (\emph{e.g.,} ResNet50) with pre-trained parameters on ImageNet \cite{Deng2009ImageNet}. $\bullet$ We resize each image into 512 $\times$ 512 pixels both in training and testing. Each image is flipped horizontally with 0.5 probability during training. $\bullet$ We normalize RGB channels by subtracting 123.675, 116.28, 103.53 and dividing by 58.395, 57.12, 57.375, respectively. $\bullet$ SGD method is adopted to optimize the model. The initial learning rate is set to be 0.005 in a single GTX 1080Ti with batchsize 4 and is decreased by 0.1 at the 8th and 11th epoch, respectively. WarmUp \cite{2019arXiv190307071L} is also employed in the first 500 iterations. Totally there are 12 training epochs. $\bullet$ Testing time augmentation (\emph{i.e.,} flipping test or multi-scale testing) is not employed. \subsection{Benchmark Analysis} Table \ref{ben} shows the benchmark for the \emph{SOTA} methods. Multi- and one- stage detectors with three kinds of backbones (\emph{i.e.,} ResNet18, 50, 101) give a comprehensive assessment on DUO. We also deploy all the methods to AGX to assess efficiency. In general, the multi-stage (Cascade R-CNN) detectors have high accuracy and low efficiency, while the one-stage (RetinaNet) detectors have low accuracy and high efficiency. However, due to recent studies \cite{zhang2019bridging} on the allocation of more reasonable positive and negative samples in training, one-stage detectors (ATSS or GFL) can achieve both high accuracy and high efficiency. \begin{table*}[htbp] \renewcommand\tabcolsep{3.0pt} \begin{center} \caption{Benchmark of \emph{SOTA} detectors (single-model and single-scale results) on DUO. FPS is measured on the same machine with a JETSON AGX XAVIER under the same MMDetection framework, using a batch size of 1 whenever possible. R: ResNet.} \label{ben} \begin{tabular}{|l|l|c|c|c|ccc|ccc|cccc|} \hline Method&Backbone&Param.&FLOPs&FPS&AP&AP$_{50}$&AP$_{75}$&AP$_{S}$&AP$_{M}$&AP$_{L}$&AP$_{Ho}$&AP$_{Ec}$&AP$_{Sc}$&AP$_{St}$ \\ \hline \emph{multi-stage:} &&&&&&&&&&&&&& \\ \multirow{3}{*}{Faster R-CNN \cite{Ren2015Faster}} &R-18&28.14M&49.75G&5.7&50.1&72.6&57.8&42.9&51.9&48.7&49.1&60.1&31.6&59.7\\ &R-50&41.14M&63.26G&4.7&54.8&75.9&63.1&53.0&56.2&53.8&55.5&62.4&38.7&62.5\\ &R-101&60.13M&82.74G&3.7&53.8&75.4&61.6&39.0&55.2&52.8&54.3&62.0&38.5&60.4\\ \hline \multirow{3}{*}{Cascade R-CNN \cite{Cai_2019}} &R-18&55.93M&77.54G&3.4&52.7&73.4&60.3&\bf 49.0&54.7&50.9&51.4&62.3&34.9&62.3\\ &R-50&68.94M&91.06G&3.0&55.6&75.5&63.8&44.9&57.4&54.4&56.8&63.6&38.7&63.5\\ &R-101&87.93M&110.53G&2.6&56.0&76.1&63.6&51.2&57.5&54.7&56.2&63.9&41.3&62.6\\ \hline \multirow{3}{*}{Grid R-CNN \cite{lu2019grid}} &R-18&51.24M&163.15G&3.9&51.9&72.1&59.2&40.4&54.2&50.1&50.7&61.8&33.3&61.9\\ &R-50&64.24M&176.67G&3.4&55.9&75.8&64.3&40.9&57.5&54.8&56.7&62.9&39.5&64.4\\ &R-101&83.24M&196.14G&2.8&55.6&75.6&62.9&45.6&57.1&54.5&55.5&62.9&41.0&62.9\\ \hline \multirow{3}{*}{RepPoints \cite{yang2019reppoints}} &R-18&20.11M&\bf 35.60G&5.6&51.7&76.9&57.8&43.8&54.0&49.7&50.8&63.3&33.6&59.2\\ &R-50&36.60M&48.54G&4.8&56.0&80.2&63.1&40.8&58.5&53.7&56.7&65.7&39.3&62.3\\ &R-101&55.60M&68.02G&3.8&55.4&79.0&62.6&42.2&57.3&53.9&56.0&65.8&39.0&60.9\\ \hline \hline \emph{one-stage:} &&&&&&&&&&&&&& \\ \multirow{3}{*}{RetinaNet \cite{Lin2017Focal}} &R-18&19.68M&39.68G&7.1&44.7&66.3&50.7&29.3&47.6&42.5&46.9&54.2&23.9&53.8\\ &R-50&36.17M&52.62G&5.9&49.3&70.3&55.4&36.5&51.9&47.6&54.4&56.6&27.8&58.3\\ &R-101&55.16M&72.10G&4.5&50.4&71.7&57.3&34.6&52.8&49.0&54.6&57.0&33.7&56.3\\ \hline \multirow{3}{*}{FreeAnchor \cite{2019arXiv190902466Z}} &R-18&19.68M&39.68G&6.8&49.0&71.9&55.3&38.6&51.7&46.7&47.2&62.8&28.6&57.6\\ &R-50&36.17M&52.62G&5.8&54.4&76.6&62.5&38.1&55.7&53.4&55.3&65.2&35.3&61.8\\ &R-101&55.16M&72.10G&4.4&54.6&76.9&62.9&36.5&56.5&52.9&54.0&65.1&38.4&60.7\\ \hline \multirow{3}{*}{FoveaBox \cite{DBLP:journals/corr/abs-1904-03797}} &R-18&21.20M&44.75G&6.7&51.6&74.9&57.4&40.0&53.6&49.8&51.0&61.9&34.6&59.1\\ &R-50&37.69M&57.69G&5.5&55.3&77.8&62.3&44.7&57.4&53.4&57.9&64.2&36.4&62.8\\ &R-101&56.68M&77.16G&4.2&54.7&77.3&62.3&37.7&57.1&52.4&55.3&63.6&38.9&60.8\\ \hline \multirow{3}{*}{PAA \cite{2020arXiv200708103K}} &R-18&\bf 18.94M&38.84G&3.0&52.6&75.3&58.8&41.3&55.1&50.2&49.9&64.6&35.6&60.5\\ &R-50&31.89M&51.55G&2.9&56.8&79.0&63.8&38.9&58.9&54.9&56.5&66.9&39.9&64.0\\ &R-101&50.89M&71.03G&2.4&56.5&78.5&63.7&40.9&58.7&54.5&55.8&66.5&42.0&61.6\\ \hline \multirow{3}{*}{FSAF \cite{zhu2019feature}} &R-18&19.53M&38.88G&\bf 7.4&49.6&74.3&55.1&43.4&51.8&47.5&45.5&63.5&30.3&58.9\\ &R-50&36.02M&51.82G&6.0&54.9&79.3&62.1&46.2&56.7&53.3&53.7&66.4&36.8&62.5\\ &R-101&55.01M&55.01G&4.5&54.6&78.7&61.9&46.0&57.1&52.2&53.0&66.3&38.2&61.1\\ \hline \multirow{3}{*}{FCOS \cite{DBLP:journals/corr/abs-1904-01355}} &R-18&\bf 18.94M&38.84G&6.5&48.4&72.8&53.7&30.7&50.9&46.3&46.5&61.5&29.1&56.6\\ &R-50&31.84M&50.34G&5.4&53.0&77.1&59.9&39.7&55.6&50.5&52.3&64.5&35.2&60.0\\ &R-101&50.78M&69.81G&4.2&53.2&77.3&60.1&43.4&55.4&51.2&51.7&64.1&38.5&58.5\\ \hline \multirow{3}{*}{ATSS \cite{zhang2019bridging}} &R-18&\bf 18.94M&38.84G&6.0&54.0&76.5&60.9&44.1&56.6&51.4&52.6&65.5&35.8&61.9\\ &R-50&31.89M&51.55G&5.2&58.2&\bf 80.1&66.5&43.9&60.6&55.9&\bf 58.6&67.6&41.8&64.6\\ &R-101&50.89M&71.03G&3.8&57.6&79.4&65.3&46.5&60.3&55.0&57.7&67.2&42.6&62.9\\ \hline \multirow{3}{*}{GFL \cite{li2020generalized}} &R-18&19.09M&39.63G&6.3&54.4&75.5&61.9&35.0&57.1&51.8&51.8&66.9&36.5&62.5\\ &R-50&32.04M&52.35G&5.5&\bf 58.6&79.3&\bf 66.7&46.5&\bf 61.6&55.6&\bf 58.6&\bf 69.1&41.3&\bf 65.3\\ &R-101&51.03M&71.82G&4.1&58.3&79.3&65.5&45.1&60.5&\bf 56.3&57.0&\bf 69.1&\bf 43.0&64.0\\ \hline \end{tabular} \end{center} \end{table*} Therefore, in terms of accuracy, the accuracy difference between the multi- and the one- stage methods in AP is not obvious, and the AP$_{S}$ of different methods is always the lowest among the three size AP. For class AP, AP$_{Sc}$ lags significantly behind the other three classes because it has the smallest number of instances. In terms of efficiency, large parameters and FLOPs result in low FPS on AGX, with a maximum FPS of 7.4, which is hardly deployable on underwater robot. Finally, we also found that ResNet101 was not significantly improved over ResNet50, which means that a very deep network may not be useful for detecting small creatures in underwater scenarios. Consequently, the design of high accuracy and high efficiency detector is still the main direction in this field and there is still large space to improve the performance. In order to achieve this goal, a shallow backbone with strong multi-scale feature fusion ability can be proposed to extract the discriminant features of small scale aquatic organisms; a specially designed training strategy may overcome the DUO's long-tail distribution, such as a more reasonable positive/negative label sampling mechanism or a class-balanced image allocation strategy within a training batch. \section{Conclusion} In this paper, we introduce a dataset (DUO) and a corresponding benchmark to fill in the gaps in the community. DUO contains a variety of underwater scenes and more reasonable annotations. Benchmark includes efficiency and accuracy indicators to conduct a comprehensive evaluation of the \emph{SOTA} decoders. The two contributions could serve as a reference for academic research and industrial applications, as well as promote community development. \bibliographystyle{IEEEbib} Answer the question based on the given passages following the instruction: Answer the question related with Passage 1.
Question: What are the datasets used in this community for research? Only give me the answer and do not output any other words.
Answer:
[ "URPC2017, URPC2018, URPC2019, URPC2020_ZJ and URPC2020_DL." ]
276
7,001
single_doc_qa
47e428a801b0401daf28ef32d603ac80
"Answer the question based on the given passages. Only give me the answer and do not output any othe(...TRUNCATED)
"Question: What are the languages they use in their experiment?\nOnly give me the answer and do not (...TRUNCATED)
Answer:
["English\nFrench\nSpanish\nGerman\nGreek\nBulgarian\nRussian\nTurkish\nArabic\nVietnamese\nThai\nCh(...TRUNCATED)
276
5,414
single_doc_qa
b6e15b89fc5846d69cc8cf0a4648ecd2
"Answer the question based on the given passages. Only give me the answer and do not output any othe(...TRUNCATED)
"Question: What factors control the reliance of artificial organisms on plasticity?\nOnly give me th(...TRUNCATED)
Answer:
["Environmental fluctuation and uncertainty control the reliance of artificial organisms on plastici(...TRUNCATED)
276
6,320
single_doc_qa
5a703fabd22d4c8b846cf8b8e56b33f9
End of preview. Expand in Data Studio
README.md exists but content is empty.
Downloads last month
56